FedBoost: A Communication-Efficient Algorithm for Federated Learning
Authors: Jenny Hamer, Mehryar Mohri, Ananda Theertha Suresh
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of FedBoost for density estimation under various communication budgets. We compare three methods: no communication-efficiency (no sampling): $\gamma_{k,t} = 1$ for all $k, t$; uniform sampling: $\gamma = C/q$; and weighted random sampling: $\gamma_{k,t} \propto \alpha_{k,t} C$ (see the sketch after the table). For simplicity, we assume all clients participate during each round of federated training. (Sections 5.1, Synthetic dataset; 5.2, Shakespeare corpus) |
| Researcher Affiliation | Collaboration | Google Research, New York, NY, USA; Courant Institute of Mathematical Sciences, New York, NY, USA. |
| Pseudocode | Yes | Figure 1. Pseudocode of the FedBoost algorithm. Figure 2. Pseudocode of the AFLBoost algorithm. |
| Open Source Code | No | The paper does not provide any statement about making its source code openly available or provide a link to a code repository. |
| Open Datasets | Yes | TensorFlow Federated Shakespeare dataset |
| Dataset Splits | No | The paper mentions using a 'Synthetic dataset' and the 'TensorFlow Federated Shakespeare dataset' but does not specify the percentages or counts for training, validation, or test splits. It refers to the 'Shakespeare corpus' as having 'p = 715 characters'. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | "For fairness of evaluation, we fix the step size η to be 0.001 and the number of rounds for both sampling methods and communication constraints, though note that this is not the ideal step size across all values of C and more optimal losses may be achieved with more extensive hyperparameter tuning." For the Shakespeare experiments: "We set C = p/2 and use η = 0.01." |
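
The following is a minimal NumPy sketch of the three per-round sampling budgets quoted in the Research Type row ($\gamma_{k,t} = 1$, $\gamma = C/q$, and $\gamma_{k,t} \propto \alpha_{k,t} C$). The function name `sampling_weights`, the normalization of the weighted scheme, and the Bernoulli transmission step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sampling_weights(alpha, C, scheme="weighted"):
    """Per-round inclusion weights gamma_k over q base estimators.

    Schemes quoted in the paper's Section 5:
      'none'     -> gamma_k = 1 for all k (no sampling),
      'uniform'  -> gamma_k = C / q,
      'weighted' -> gamma_k proportional to alpha_k * C.
    """
    alpha = np.asarray(alpha, dtype=float)
    q = alpha.size
    if scheme == "none":
        return np.ones(q)
    if scheme == "uniform":
        return np.full(q, C / q)
    if scheme == "weighted":
        # Assumed normalization: scale the mixture weights so the expected
        # number of estimators transmitted per round is roughly C.
        return C * alpha / alpha.sum()
    raise ValueError(f"unknown scheme: {scheme!r}")

# Toy usage with q = 4 estimators and budget C = q/2 (the paper's
# Shakespeare experiments analogously set C = p/2).
rng = np.random.default_rng(0)
gamma = sampling_weights([0.4, 0.3, 0.2, 0.1], C=2, scheme="weighted")
transmitted = rng.random(gamma.size) < np.clip(gamma, 0.0, 1.0)
print(gamma, transmitted)
```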