Debiasing Model Updates for Improving Personalized Federated Training

Authors: Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings. We also perform extensive experiments to empirically evaluate our method on real world datasets, and show that our method significantly outperforms prior works.
Researcher Affiliation | Collaboration | ¹Boston University, Boston, MA; ²Arm ML Research Lab, Boston, MA.
Pseudocode | Yes | Algorithm 1: Personalized Federated Learning (PFL); Algorithm 2: PFL Subroutines
Open Source Code | No | The paper does not provide a concrete access link or an explicit statement about the release of its own source code.
Open Datasets | Yes | We use popular datasets with standard train/test splits such as CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009).
Dataset Splits | Yes | We use popular datasets with standard train/test splits such as CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009). (A data-loading sketch follows the table.)
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | We implement methods in the PyTorch framework (Paszke et al., 2019) and use the higher library (Grefenstette et al., 2019) for MAML adaptation. The paper names these packages but does not give version numbers, so the ancillary software is not specified reproducibly. (A higher usage sketch follows the table.)
Experiment Setup | Yes | Input: T, w^1, g^1_i = g^1 = 0, K, β, α; for t = 1, 2, ..., T do ...; set the customized model w^{t+1}_{i,k} = T_i(w^{t+1}_{i,k}, D^k_i) and update the meta model w^{t+1}_{i,k+1} = w^{t+1}_{i,k} − β ∇( f_i(w^{t+1}_{i,k}, D^k_i) + R^t_i(w^{t+1}_{i,k}) ), where β is the learning rate. We refer to Appendix A.1 for additional experimental details. (An illustrative sketch of this update step follows the table.)
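For the dataset rows above: the standard CIFAR-10/CIFAR-100 train/test splits are most commonly obtained through torchvision. The paper does not name torchvision, so the snippet below is only a minimal sketch of what "standard train/test splits" means in a PyTorch pipeline.

```python
# Minimal sketch: the standard CIFAR-10 train/test split via torchvision.
# torchvision is an assumption here; the paper only states that standard splits are used.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=transform)
print(len(train_set), len(test_set))  # 50000 train / 10000 test
```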
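For the software-dependencies row: the higher library (Grefenstette et al., 2019), cited for MAML adaptation, is typically used as below. The model, data, learning rates, and step counts here are illustrative placeholders, not the paper's configuration.

```python
# Sketch of MAML-style inner-loop adaptation with the `higher` library.
# All models, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn.functional as F
import higher

model = torch.nn.Linear(32, 10)
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_opt = torch.optim.SGD(model.parameters(), lr=1e-1)

x_s, y_s = torch.randn(8, 32), torch.randint(0, 10, (8,))   # support set
x_q, y_q = torch.randn(8, 32), torch.randint(0, 10, (8,))   # query set

meta_opt.zero_grad()
with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=False) as (fmodel, diffopt):
    # Inner loop: adapt a differentiable copy of the model on the support set.
    for _ in range(5):
        diffopt.step(F.cross_entropy(fmodel(x_s), y_s))
    # Outer loss on the query set backpropagates through the inner-loop updates.
    F.cross_entropy(fmodel(x_q), y_q).backward()
meta_opt.step()
```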
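For the experiment-setup row: the quoted steps of Algorithm 1 amount to a per-client gradient step on f_i plus the debiasing term R_i^t, alongside customization via T_i. The sketch below only mirrors that structure; `customize` (standing in for T_i) and `debias_penalty` (standing in for R_i^t) are hypothetical callables, since their exact definitions are given in the paper, not here.

```python
# Illustrative sketch of the quoted client-side steps of Algorithm 1 (PFL).
# `customize` (T_i) and `debias_penalty` (R_i^t) are hypothetical stand-ins;
# only the structure w <- w - beta * grad(f_i(w, D) + R_i^t(w)) is shown.
import torch
import torch.nn.functional as F

def client_steps(meta_model, batches, customize, debias_penalty, beta):
    personalized = None
    for x, y in batches:                      # K minibatches drawn from D_i
        # Customized (personalized) model: T_i(w, D_i^k).
        personalized = customize(meta_model, (x, y))
        # Meta-model update on the debiased objective f_i + R_i^t.
        meta_model.zero_grad()
        loss = F.cross_entropy(meta_model(x), y) + debias_penalty(meta_model)
        loss.backward()
        with torch.no_grad():
            for p in meta_model.parameters():
                p -= beta * p.grad
    return meta_model, personalized
```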