Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach
Authors: Han Guo, Philip Greengard, Hongyi Wang, Andrew Gelman, Yoon Kim, Eric Xing
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct an extensive empirical study across various algorithmic considerations and describe practical strategies for scaling up expectation propagation to the modern federated setting. We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy. |
| Researcher Affiliation | Collaboration | Carnegie Mellon University, Massachusetts Institute of Technology, Columbia University, Mohamed bin Zayed University of Artificial Intelligence, Petuum Inc. |
| Pseudocode | Yes | Algorithm 1: Federated Learning as Inference (a hedged sketch of the EP loop follows the table) |
| Open Source Code | Yes | Code: https://github.com/HanGuo97/expectation-propagation. This work was completed while Han Guo was a visiting student at MIT. |
| Open Datasets | Yes | We use the dataset preprocessing provided in TensorFlow Federated (TFF Authors, 2018) |
| Dataset Splits | Yes | Table 1: Model and dataset statistics. ... Clients (train/test) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using 'TensorFlow Federated (TFF Authors, 2018)' and implementing models in 'JAX (Bradbury et al., 2018)' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We refer the reader to the appendix for the exact experimental setup. ... Table 7: Hyperparameters. ... Server Learning Rate 0.5 ... Client Learning Rate 0.01 ... Client Epochs 10 |
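For orientation on the pseudocode row above: the paper frames each federated round as expectation propagation, where the server maintains a global approximate posterior built from per-client "site" factors and each participating client refines its site against a cavity distribution. The sketch below is a minimal, hypothetical illustration of that loop using a toy conjugate Gaussian model, so the client-side tilted-distribution fit is closed-form; FedEP instead fits the tilted distribution with local SGD (client learning rate 0.01, 10 client epochs per Table 7), and the toy model, damping factor, and all variable names here are assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a scalar parameter theta; each client holds noisy
# observations y ~ N(theta, SIGMA2).
NUM_CLIENTS, SIGMA2 = 10, 1.0
theta_true = 2.0
client_data = [theta_true + rng.normal(0.0, np.sqrt(SIGMA2), size=20)
               for _ in range(NUM_CLIENTS)]

# All factors are Gaussian, stored as natural parameters (lam, eta), where
# lam = 1/variance and eta = mean/variance. The global approximation is
# prior * prod_k site_k, so natural parameters simply add.
prior = np.array([1.0 / 10.0, 0.0])                # N(0, 10) prior
sites = [np.zeros(2) for _ in range(NUM_CLIENTS)]  # uninformative initial sites
global_nat = prior + sum(sites)

DAMPING = 0.5  # assumed damping factor; the paper's update schedule may differ
for rnd in range(3):
    for k in range(NUM_CLIENTS):
        cavity = global_nat - sites[k]             # remove client k's site
        # Client step: approximate the tilted distribution
        # cavity * p(y_k | theta). With a conjugate Gaussian likelihood this
        # moment matching is exact; FedEP fits it with local SGD instead.
        y = client_data[k]
        tilted = cavity + np.array([len(y) / SIGMA2, y.sum() / SIGMA2])
        new_site = (1 - DAMPING) * sites[k] + DAMPING * (tilted - cavity)
        global_nat += new_site - sites[k]          # server applies the site delta
        sites[k] = new_site
    print(f"round {rnd}: posterior mean ~ {global_nat[1] / global_nat[0]:.4f}")
```

In this conjugate toy the posterior mean converges to the data average within a round; the point of the sketch is only the message-passing structure (cavity, tilted fit, damped site update, server aggregation) that the paper scales to neural networks.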