FedExP: Speeding Up Federated Averaging via Extrapolation
Authors: Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that FedExP consistently converges faster than FedAvg and competing baselines on a range of realistic FL datasets. |
| Researcher Affiliation | Collaboration | 1Carnegie Mellon University, 2IBM Research |
| Pseudocode | Yes | Algorithm 1 Proposed Algorithm: FedExP |
| Open Source Code | Yes | Our code is available at the following link https://github.com/Divyansh03/FedExP. |
| Open Datasets | Yes | For realistic FL tasks, we consider image classification on the following datasets i) EMNIST (Cohen et al., 2017), ii) CIFAR-10 (Krizhevsky et al., 2009), iii) CIFAR-100 (Krizhevsky et al., 2009), iv) CINIC-10 (Darlow et al., 2018). |
| Dataset Splits | No | For EMNIST we use the federated version of EMNIST available at Caldas et al. (2019), which is naturally partitioned into 3400 clients. The number of training and test samples is 671,585 and 77,483 respectively. For CIFAR-10/100...the number of training examples and test examples is 50,000 and 10,000 respectively. The paper clearly defines training and test splits, but not a separate validation split for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud computing instances used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as 'Python 3.8' or 'PyTorch 1.9'. |
| Experiment Setup | Yes | For our baselines, we find the best performing ηg and ηl by grid-search tuning. For FedExP we optimize for ϵ and ηl by grid search. We fix the number of participating clients to 20, minibatch size to 50 and number of local updates to 20 for all experiments. In Appendix D, we provide additional details and results, including the best performing hyperparameters... |
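The server-side extrapolation step behind FedExP (Algorithm 1 in the paper) can be sketched as below. This is a minimal NumPy sketch, not the authors' implementation: the function name, the flattened-parameter representation, and the exact placement of ϵ in the adaptive step-size formula are our assumptions and should be checked against the released code at the repository linked above.

```python
import numpy as np

def fedexp_server_update(w_global, client_updates, eps=1e-3):
    """One FedExP server round (illustrative sketch, not the authors' code).

    w_global: flattened global model parameters (1-D array).
    client_updates: list of per-client pseudo-gradients Delta_i = w_global - w_i,
        where w_i is client i's model after its local updates.
    eps: small constant (the paper tunes it by grid search).
    """
    M = len(client_updates)
    delta_bar = sum(client_updates) / M  # average pseudo-gradient (FedAvg direction)
    # Adaptive server step size via extrapolation (our reading of the paper's rule):
    #   eta_g = max(1, sum_i ||Delta_i||^2 / (2 M (||delta_bar||^2 + eps)))
    # so eta_g >= 1, and it grows when client updates disagree (small ||delta_bar||).
    num = sum(float(np.dot(d, d)) for d in client_updates)
    denom = 2.0 * M * (float(np.dot(delta_bar, delta_bar)) + eps)
    eta_g = max(1.0, num / denom)
    return w_global - eta_g * delta_bar, eta_g
```

With eta_g fixed at 1 this reduces to plain FedAvg; the extrapolated step only kicks in when the per-client updates are large relative to their average, which is the regime where FedAvg is overly conservative.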