On the Convergence of Federated Averaging with Cyclic Client Participation
Authors: Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Experimental Results. Setup. We train ML models on standard datasets using FedAvg with CyCP for different client local update procedures to see how cyclicity affects the performance of FL. We experiment with image classification using an MLP for the FMNIST (Xiao et al., 2017) dataset and EMNIST dataset (Cohen et al., 2017)... |
| Researcher Affiliation | Collaboration | 1Carnegie Mellon University, USA 2Google Research, USA 3Hong Kong University of Science and Technology, Hong Kong. |
| Pseudocode | Yes | Algorithm 1: CyCP Framework in FL (see the sketch below the table) |
| Open Source Code | Yes | The code used for all experiments is included in the supplementary material. |
| Open Datasets | Yes | We experiment with image classification using an MLP for the FMNIST (Xiao et al., 2017) dataset and EMNIST dataset (Cohen et al., 2017) with 62 labels, where we have 100 and 500 clients in total and select 5 and 10 clients per communication round, respectively. |
| Dataset Splits | Yes | For all experiments, the data is partitioned to 80%/10%/10% for training/validation/test data, where the training data then is again partitioned across the clients heterogeneously. |
| Hardware Specification | Yes | All experiments are conducted on clusters equipped with one NVIDIA Titan X GPU. |
| Software Dependencies | Yes | The algorithms are implemented in PyTorch 1.11.0. |
| Experiment Setup | Yes | Specifically, we do a grid search over the learning rate: η ∈ {0.05, 0.01, 0.005, 0.001}, batch size: b ∈ {32, 64, 128}, and local iterations: τ ∈ {5, 10, 30, 50} to find the hyper-parameters with the highest test accuracy for each benchmark. (A grid-search sketch follows below.) |
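
The Pseudocode row above references Algorithm 1, the CyCP framework. As a rough illustration of the idea, the following is a minimal sketch of FedAvg with cyclic client participation: clients are partitioned into groups that are visited in a fixed cyclic order, a few clients are sampled from the active group each round, each runs τ local SGD steps, and the server averages the returned models. The synthetic least-squares problem, the one-round-per-group schedule, and all variable names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

# Minimal sketch of FedAvg with cyclic client participation (CyCP).
# NOTE: the synthetic data, model, and hyper-parameters below are assumptions
# for illustration only; they are not the authors' released implementation.

rng = np.random.default_rng(0)

M, K, N = 100, 4, 5          # total clients, client groups, clients sampled per round
tau, eta, d = 10, 0.05, 20   # local SGD steps, learning rate, model dimension

# Synthetic heterogeneous client data: each client holds its own least-squares problem.
clients = []
for i in range(M):
    X = rng.normal(size=(50, d))
    w_i = rng.normal(size=d) + 0.5 * i / M      # per-client optimum drifts -> heterogeneity
    y = X @ w_i + 0.1 * rng.normal(size=50)
    clients.append((X, y))

groups = np.array_split(rng.permutation(M), K)  # fixed partition into K client groups
w = np.zeros(d)                                 # global model


def local_sgd(w0, X, y, steps, lr, batch=32):
    """Run `steps` mini-batch SGD iterations on the local least-squares loss."""
    w_local = w0.copy()
    for _ in range(steps):
        idx = rng.choice(len(y), size=min(batch, len(y)), replace=False)
        grad = X[idx].T @ (X[idx] @ w_local - y[idx]) / len(idx)
        w_local -= lr * grad
    return w_local


for cycle in range(20):                          # each cycle visits every group once
    for k in range(K):                           # groups traversed in a fixed cyclic order
        # (one round per group per cycle here; Algorithm 1 in the paper may differ in details)
        active = rng.choice(groups[k], size=min(N, len(groups[k])), replace=False)
        updates = [local_sgd(w, *clients[i], tau, eta) for i in active]
        w = np.mean(updates, axis=0)             # server averages the returned models
```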
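
The hyper-parameter search quoted in the Experiment Setup row amounts to an exhaustive sweep over the listed grid. Below is a minimal sketch of such a sweep, assuming a training entry point; `run_fedavg_cycp` is a hypothetical stand-in (not part of the authors' code) that here returns a dummy score only so the loop executes end to end.

```python
import itertools

# Minimal sketch of the quoted hyper-parameter grid search.
# `run_fedavg_cycp` is a hypothetical placeholder, not the authors' code.

grid = {
    "eta": [0.05, 0.01, 0.005, 0.001],  # learning rate
    "b":   [32, 64, 128],               # batch size
    "tau": [5, 10, 30, 50],             # local iterations
}


def run_fedavg_cycp(eta, b, tau):
    """Stand-in for a full FedAvg + CyCP training run returning test accuracy.
    Replace with real training; the dummy score below only keeps the sweep runnable."""
    return 1.0 / (1.0 + abs(eta - 0.01)) - 0.001 * tau / b


best_acc, best_cfg = float("-inf"), None
for eta, b, tau in itertools.product(grid["eta"], grid["b"], grid["tau"]):
    acc = run_fedavg_cycp(eta, b, tau)
    if acc > best_acc:
        best_acc, best_cfg = acc, {"eta": eta, "b": b, "tau": tau}

print("selected hyper-parameters:", best_cfg)
```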