Communication-Efficient Adaptive Federated Learning
Authors: Yujia Wang, Lu Lin, Jinghui Chen
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on various benchmarks verify our theoretical analysis." and "In this section, we present empirical validations toward the effectiveness of our proposed algorithms." |
| Researcher Affiliation | Academia | "¹College of Information Sciences and Technology, Pennsylvania State University, State College, PA, United States; ²Department of Computer Science, University of Virginia, Charlottesville, VA, United States." |
| Pseudocode | Yes | Algorithm 1 (FedAMS) and Algorithm 2 (FedCAMS); a minimal sketch of the FedAMS server update appears after the table. |
| Open Source Code | No | The paper does not contain any explicit statements about open-sourcing the code or providing a link to a code repository. |
| Open Datasets | Yes | "We test all federated learning baselines, including ours on CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009)." |
| Dataset Splits | No | The paper specifies training parameters like partial participation ratio (0.1), local epochs (3), and batch size (20), and evaluates on 'test accuracy', but does not explicitly describe a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers, such as libraries or frameworks used. |
| Experiment Setup | Yes | "We set in total 100 clients for all federated training experiments. We set the partial participation ratio as 0.1, i.e., in each round, the server picks 10 out of 100 clients to participate in the communication and model update. In each round, the client will perform 3 local epochs of local training with batch size 20." and "For adaptive federated optimization methods, we set β₁ = 0.9, β₂ = 0.99. For FedAdam, FedYogi, and FedAMSGrad, we search the best ϵ from {10⁻⁸, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 10⁰}. For FedAMS and FedCAMS, we search the max stabilization ϵ from {10⁻⁸, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 10⁰}." (A round-loop sketch of this setup follows the table.) |
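For concreteness, here is a minimal Python sketch of the server-side FedAMS step implied by Algorithm 1. It assumes an AMSGrad-style update with the β₁/β₂ values reported above, and it reads the paper's "max stabilization ϵ" as a floor inside the element-wise maximum; the function name and signature are illustrative, not the authors' code.

```python
import numpy as np

def fedams_server_update(x, m, v, v_hat, delta,
                         lr=1.0, beta1=0.9, beta2=0.99, eps=1e-3):
    """One hypothetical server-side FedAMS step on flattened parameters.

    `delta` is the average of the participating clients' local model
    differences for the round. beta1/beta2 follow the paper's reported
    setting (0.9 / 0.99); eps plays the role of the "max stabilization"
    constant the authors tune over {1e-8, ..., 1e0}.
    """
    m = beta1 * m + (1 - beta1) * delta        # first moment of client deltas
    v = beta2 * v + (1 - beta2) * delta ** 2   # second moment
    # Max stabilization (our reading): element-wise running maximum of the
    # second moment, floored at eps, so the denominator never collapses.
    v_hat = np.maximum(np.maximum(v_hat, v), eps)
    x = x + lr * m / np.sqrt(v_hat)            # AMSGrad-style server update
    return x, m, v, v_hat
```

FedCAMS (Algorithm 2) additionally compresses the client-to-server differences with error feedback before this step; the compressor is omitted here.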
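And a sketch of the round structure described in the Experiment Setup row, assuming uniform client sampling; `client_data`, `local_train`, and the plain averaging of deltas are hypothetical stand-ins, not the authors' training harness.

```python
import random
import numpy as np

NUM_CLIENTS = 100    # total clients (paper setting)
SAMPLE_RATIO = 0.1   # partial participation: 10 of 100 clients per round
LOCAL_EPOCHS = 3     # local epochs of training per round
BATCH_SIZE = 20      # local batch size

def run_round(global_params, client_data, local_train, rng=random):
    """One communication round under the reported setup.

    `client_data` maps client id -> local dataset; `local_train` is a
    hypothetical helper that runs LOCAL_EPOCHS of local training with
    BATCH_SIZE and returns that client's model difference as an array.
    """
    sampled = rng.sample(range(NUM_CLIENTS), int(SAMPLE_RATIO * NUM_CLIENTS))
    deltas = [local_train(global_params, client_data[i],
                          LOCAL_EPOCHS, BATCH_SIZE)
              for i in sampled]
    # The server averages the sampled clients' differences, then applies
    # its adaptive update (e.g. fedams_server_update above).
    return np.mean(deltas, axis=0)
```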