A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Authors: Yan Sun, Li Shen, Dacheng Tao
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on several classical FL setups to validate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Yan Sun (The University of Sydney) ysun9899@uni.sydney.edu.au; Li Shen (Shenzhen Campus of Sun Yat-sen University) mathshenli@gmail.com; Dacheng Tao (Nanyang Technological University) dacheng.tao@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1: A-FedPD Algorithm |
| Open Source Code | Yes | We submit our code demo to reproduce the experiments and all hyperparameters can be found in our paper. ... We submit the code demo to reproduce the experiments. |
| Open Datasets | Yes | We follow previous work to test the performance of benchmarks on the CIFAR-10 / 100 dataset Krizhevsky et al. [2009]. |
| Dataset Splits | No | The paper states 'The total dataset of both contain 50,000 training samples and 10,000 test samples of 10 / 100 classes.' for CIFAR-10/100, providing training and test set sizes, but it does not explicitly detail a separate validation split or its size/percentage (see the dataset-loading sketch after this table). |
| Hardware Specification | Yes | Hardware: NVIDIA GeForce RTX 2080 Ti |
| Software Dependencies | Yes | Platform: PyTorch 2.0.1; CUDA: 11.7 |
| Experiment Setup | Yes | In each setup, for a fair comparison, we freeze most of the hyperparameters for all methods. We fix total communication rounds T = 800 except for the ablation studies. ... Table 3: Hyperparameter selections of benchmarks. (See the round-loop sketch below for what a communication round refers to.) |
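
The Dataset Splits row above quotes only training and test set sizes. The following is a minimal sketch, not taken from the authors' released code demo, that loads CIFAR-10/100 via torchvision and checks the 50,000 / 10,000 split stated in the paper; the data root and transform are illustrative assumptions.

```python
# Sketch only (not the paper's code): verify the CIFAR-10/100 train/test sizes
# quoted in the reproducibility table using torchvision.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # placeholder transform; the paper's augmentation may differ

for name, cls in [("CIFAR-10", torchvision.datasets.CIFAR10),
                  ("CIFAR-100", torchvision.datasets.CIFAR100)]:
    train_set = cls(root="./data", train=True, download=True, transform=transform)
    test_set = cls(root="./data", train=False, download=True, transform=transform)
    # Expected: 50,000 training samples and 10,000 test samples, as stated in the paper.
    print(f"{name}: train={len(train_set)}, test={len(test_set)}")
```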
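
The Experiment Setup row fixes the total number of communication rounds at T = 800. The skeleton below is a generic federated-learning round loop with FedAvg-style parameter averaging, shown only to illustrate what a "communication round" refers to; it is not the A-FedPD primal-dual update, and the client count, sampling ratio, model, and local update are hypothetical placeholders.

```python
# Generic federated round skeleton (illustration only, NOT A-FedPD).
import copy
import random
import torch
import torch.nn as nn

T_ROUNDS = 800          # fixed in the paper's setup (except ablations)
NUM_CLIENTS = 100       # hypothetical
SAMPLE_RATIO = 0.1      # hypothetical

global_model = nn.Linear(32 * 32 * 3, 10)  # stand-in model, not the paper's backbone

def local_update(model, client_id):
    # Placeholder for a client's local training (e.g., a few local SGD epochs).
    return {k: v.clone() for k, v in model.state_dict().items()}

for t in range(T_ROUNDS):
    active = random.sample(range(NUM_CLIENTS), int(NUM_CLIENTS * SAMPLE_RATIO))
    client_states = [local_update(copy.deepcopy(global_model), cid) for cid in active]
    # Server aggregation: simple parameter averaging (FedAvg-style placeholder).
    avg_state = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
                 for k in client_states[0]}
    global_model.load_state_dict(avg_state)
```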