A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging
Authors: Shiqiang Wang, Mingyue Ji
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results also verify the advantage of FedAU over baseline methods with various participation patterns. |
| Researcher Affiliation | Collaboration | Shiqiang Wang IBM T. J. Watson Research Center Yorktown Heights, NY 10598 wangshiq@us.ibm.com Mingyue Ji Department of ECE, University of Utah Salt Lake City, UT 84112 mingyue.ji@utah.edu |
| Pseudocode | Yes | Algorithm 1: FedAvg with pluggable aggregation weights |
| Open Source Code | Yes | The code for reproducing our experiments is available via the following link: https://shiqiang.wang/code/fedau |
| Open Datasets | Yes | We consider four image classification tasks, with datasets including SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and CINIC-10 (Darlow et al., 2018) |
| Dataset Splits | No | The paper mentions training and test data, but does not specify the exact split percentages or a dedicated validation set split for reproduction. |
| Hardware Specification | Yes | The experiments were split between a desktop machine with an RTX 3070 GPU and an internal GPU cluster. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | The grid for the local step size γ is {10^-2, 10^-1.75, 10^-1.5, 10^-1.25, 10^-1, 10^-0.75, 10^-0.5} and the grid for the global step size η is {10^0, 10^0.25, 10^0.5, 10^0.75, 10^1, 10^1.25, 10^1.5}. To reduce the complexity of the search, we first search for the value of γ with η = 1, and then search for η while fixing γ to the value found in the first search. We consider the training loss at 500 rounds for determining the best γ and η. |
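The "Pseudocode" row refers to Algorithm 1, FedAvg with pluggable aggregation weights. Below is a minimal Python sketch of what one such round could look like, not the authors' implementation: model parameters are treated as a flat list of floats, and the client helpers (`participates_this_round`, `compute_gradient`, `id`), the caller-supplied `aggregation_weights` mapping (e.g., FedAU's online estimates), and the 1/N normalization are all assumptions made for illustration.

```python
import copy

def fedavg_round(global_model, clients, aggregation_weights,
                 local_steps, local_lr, global_lr):
    """One FedAvg round where per-client aggregation weights are supplied
    ("plugged in") by the caller. Illustrative sketch only; the client API
    and normalization convention are assumptions, not the paper's code."""
    updates = []
    participating = [c for c in clients if c.participates_this_round()]
    for client in participating:
        # Start local training from a copy of the current global model.
        local_model = copy.deepcopy(global_model)
        for _ in range(local_steps):
            grad = client.compute_gradient(local_model)  # stochastic gradient on local data
            local_model = [p - local_lr * g for p, g in zip(local_model, grad)]
        # Record the local update (difference from the global parameters).
        updates.append((client.id, [lp - gp for lp, gp in zip(local_model, global_model)]))

    # Weighted aggregation using the pluggable weights, normalized by the
    # total number of clients (an assumed convention in this sketch).
    n = len(clients)
    aggregated = [0.0 for _ in global_model]
    for cid, delta in updates:
        w = aggregation_weights[cid]
        aggregated = [a + (w / n) * d for a, d in zip(aggregated, delta)]

    # Apply the aggregated update with the global step size.
    return [gp + global_lr * a for gp, a in zip(global_model, aggregated)]
```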
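The two-stage hyperparameter search described in the "Experiment Setup" row can be expressed compactly. The sketch below assumes a hypothetical `train_loss_after_500_rounds(gamma, eta)` callback that runs training and returns the training loss at round 500; the negative exponents in the γ grid are reconstructed from the extracted text and should be checked against the paper.

```python
# Hyperparameter grids from the experiment setup (gamma exponents assumed negative).
gamma_grid = [10 ** e for e in (-2, -1.75, -1.5, -1.25, -1, -0.75, -0.5)]
eta_grid = [10 ** e for e in (0, 0.25, 0.5, 0.75, 1, 1.25, 1.5)]

def two_stage_grid_search(train_loss_after_500_rounds):
    """Stage 1: pick the local step size gamma with the global step size eta = 1.
    Stage 2: pick eta with gamma fixed to the value found in stage 1."""
    best_gamma = min(gamma_grid, key=lambda g: train_loss_after_500_rounds(g, 1.0))
    best_eta = min(eta_grid, key=lambda e: train_loss_after_500_rounds(best_gamma, e))
    return best_gamma, best_eta
```

This reduces the search from 49 full-grid runs to 14, at the cost of ignoring possible interactions between γ and η, which is the trade-off the setup description accepts.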