Distributionally Robust Federated Averaging
Authors: Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We give corroborating experimental evidence for our theoretical results in federated learning settings. |
| Researcher Affiliation | Academia | Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi; The Pennsylvania State University; {yzd82,mqk5591,mzm616}@psu.edu |
| Pseudocode | Yes | Algorithm 1: Distributionally Robust Federated Averaging (DRFA) (a hedged sketch of this loop appears after the table) |
| Open Source Code | Yes | The code repository used for these experiments can be found at: https://github.com/MLOPTPSU/TorchFed/ |
| Open Datasets | Yes | We use three datasets, namely, Fashion MNIST [48], Adult [1], and Shakespeare [4] datasets. |
| Dataset Splits | No | The paper mentions using 'test accuracies' and 'training' but does not provide specific details on train/validation/test dataset splits, percentages, or methodology for partitioning the data. |
| Hardware Specification | Yes | We implement our algorithm based on Distributed API of PyTorch [41] using MPI as our main communication interface, and on an Intel Xeon E5-2695 CPU with 28 cores. |
| Software Dependencies | No | We implement our algorithm based on Distributed API of PyTorch [41] using MPI as our main communication interface (PyTorch is mentioned, but no version number is given for PyTorch, MPI, or Python). |
| Experiment Setup | Yes | We use different synchronization gaps of τ ∈ {5, 10, 15}, and set η = 0.1 and γ = 8 × 10⁻³. [...] The batch size is 50 and synchronization gap is τ = 10. We set η = 0.1 for all algorithms, γ = 8 × 10⁻³ for DRFA and AFL, and q = 0.2 for q-FedAvg. (These values are collected in the configuration sketch below.) |
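
To make the pseudocode row concrete, below is a minimal sketch of one DRFA communication round under our reading of the paper: local SGD on clients sampled according to the mixing weights λ, server averaging every τ steps, and a multiplicative (mirror-ascent) update of λ driven by per-client losses, targeting the objective min_w max_{λ∈Δ} Σᵢ λᵢ Fᵢ(w). The `clients` interface, all variable names, and the choice to evaluate losses at the freshly averaged iterate (the paper samples a random iterate within each window) are our assumptions, not the authors' implementation; their actual code is linked in the Open Source Code row.

```python
import numpy as np

def drfa_round(clients, w, lam, tau, eta, gamma, m):
    """One DRFA-style communication round (illustrative sketch only;
    `clients[i].stoch_grad` and `clients[i].loss` are hypothetical hooks)."""
    n = len(clients)
    # Sample m clients according to the current mixing weights lam.
    idx = np.random.choice(n, size=m, replace=False, p=lam)
    local_models = []
    for i in idx:
        wi = w.copy()
        for _ in range(tau):               # tau local SGD steps (sync gap)
            wi -= eta * clients[i].stoch_grad(wi)
        local_models.append(wi)
    w = np.mean(local_models, axis=0)      # server-side averaging
    # Mirror ascent (multiplicative update) on lam using per-client losses.
    # NOTE: simplification — we evaluate losses at the averaged model rather
    # than at a randomly sampled iterate within the window, as the paper does.
    losses = np.array([c.loss(w) for c in clients])
    lam = lam * np.exp(gamma * losses)
    lam = lam / lam.sum()                  # project back onto the simplex
    return w, lam
```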
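
The hardware, software, and setup rows can likewise be summarized in a single configuration sketch. The snippet below wires up PyTorch's Distributed API with the MPI backend as reported, and collects the quoted hyperparameters in one place; the variable names, the `average_model` helper, and the placeholder model are ours, not the paper's.

```python
import torch
import torch.distributed as dist

# PyTorch Distributed over MPI, as reported in the hardware row.
# Requires a PyTorch build with MPI support; launch with something like
# `mpirun -n <num_processes> python train.py`.
dist.init_process_group(backend="mpi")
rank, world_size = dist.get_rank(), dist.get_world_size()

# Hyperparameters quoted in the Experiment Setup row; key names are ours.
config = {
    "batch_size": 50,
    "tau": 10,        # synchronization gap (also swept over {5, 10, 15})
    "eta": 0.1,       # learning rate, all algorithms
    "gamma": 8e-3,    # step size on the mixing weights (DRFA and AFL)
    "q": 0.2,         # fairness parameter (q-FedAvg only)
}

def average_model(model: torch.nn.Module) -> None:
    """FedAvg-style parameter averaging across all workers."""
    for p in model.parameters():
        dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
        p.data /= world_size

model = torch.nn.Linear(784, 10)  # placeholder; actual models are per-dataset
average_model(model)
```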