Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability
Authors: Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We corroborate our analysis with numerical experiments over diversified client unavailability dynamics on real-world data sets. |
| Researcher Affiliation | Academia | 1Northeastern University, Boston, MA 2Carnegie Mellon University, Pittsburgh, PA |
| Pseudocode | Yes | Algorithm 1: FedAWE |
| Open Source Code | Yes | The code for reproducing our experiments is available at https://github.com/mingxiang12/FedAWE. |
| Open Datasets | Yes | The image classification tasks use CNNs and are based on SVHN [37], CIFAR-10 [26] and CINIC-10 [12] data sets. |
| Dataset Splits | No | The paper reports 'train images' and 'test images' counts for the SVHN, CIFAR-10, and CINIC-10 data sets. It also states that learning rates are 'searched, based on the best performance after 500 global rounds', implying a validation process, but it does not explicitly define a separate validation split or its size. |
| Hardware Specification | Yes | The simulations are performed on a private cluster with 64 CPUs, 500 GB RAM and 8 NVIDIA A5000 GPU cards. |
| Software Dependencies | Yes | We code the experiments based on PyTorch 1.13.1 [40] and Python 3.7.16. |
| Experiment Setup | Yes | Table 6 specifies details of the structures of the convolutional neural network and training. ... The initial local learning rate η0 and the global learning rate ηg are searched, based on the best performance after 500 global rounds, over two grids {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005} and {0.5, 1, 1.5, 5, 10, 50}, respectively. |
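
To make the reported hyperparameter search concrete, below is a minimal sketch of the exhaustive sweep over the two learning-rate grids quoted above, selecting the pair with the best performance after 500 global rounds. The function `run_federated_training` is a hypothetical stand-in, not the authors' code; the actual training loop is in their repository.

```python
import itertools

# Learning-rate grids quoted in the paper: initial local learning rate eta_0
# and global learning rate eta_g, each selected by best performance after
# 500 global rounds.
LOCAL_LR_GRID = [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005]
GLOBAL_LR_GRID = [0.5, 1, 1.5, 5, 10, 50]
NUM_ROUNDS = 500


def run_federated_training(local_lr: float, global_lr: float, rounds: int) -> float:
    """Hypothetical stub standing in for one full training run.

    It should return the evaluation metric (e.g., test accuracy) after
    `rounds` global rounds; plug in the actual FedAWE training loop here.
    """
    raise NotImplementedError


def grid_search() -> tuple[float, float, float]:
    """Exhaustively try every (eta_0, eta_g) pair and keep the best one."""
    best_local, best_global, best_acc = None, None, float("-inf")
    for local_lr, global_lr in itertools.product(LOCAL_LR_GRID, GLOBAL_LR_GRID):
        acc = run_federated_training(local_lr, global_lr, NUM_ROUNDS)
        if acc > best_acc:
            best_local, best_global, best_acc = local_lr, global_lr, acc
    return best_local, best_global, best_acc
```

This sweep covers all 6 × 6 = 36 grid combinations; with 500 global rounds per run, the search cost dominates a single training run by that factor, which is consistent with the paper's use of a private multi-GPU cluster.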