Personalized Federated Learning with Moreau Envelopes
Authors: Canh T. Dinh, Nguyen Tran, Josh Nguyen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically evaluate the performance of pFedMe using both real and synthetic datasets that capture the statistical diversity of clients data. We show that pFedMe outperforms the vanilla FedAvg and a meta-learning based personalized FL algorithm Per-FedAvg in terms of convergence rate and local accuracy. |
| Researcher Affiliation | Academia | 1The University of Sydney, Australia tdin6081@uni.sydney.edu.au, nguyen.tran@sydney.edu.au 2The University of Melbourne, Australia tuandungn@unimelb.edu.au |
| Pseudocode | Yes | Algorithm 1 pFedMe: Personalized Federated Learning using Moreau Envelope Algorithm |
| Open Source Code | Yes | The code and datasets are available online: https://github.com/CharlieDinh/pFedMe |
| Open Datasets | Yes | We consider a classification problem using both real (MNIST) and synthetic datasets. MNIST [51] is a handwritten digit dataset containing 10 labels and 70,000 instances. |
| Dataset Splits | Yes | All datasets are split randomly with 75% and 25% for training and testing, respectively. |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments. |
| Software Dependencies | Yes | All experiments were conducted using PyTorch [52] version 1.4.0. |
| Experiment Setup | Yes | We fix the subset of clients S = 5 for MNIST, and S = 10 for Synthetic. We compare the algorithms using both cases of the same and fine-tuned learning rates, batch sizes, and number of local and global iterations. ... We fix |D| = 20, R = 20, K = 5, and T = 800 for MNIST, and T = 600 for Synthetic, β = 2 for pFedMe (α̂ and β̂ are learning rates of Per-FedAvg). |
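
The "Dataset Splits" and "Experiment Setup" rows above quote concrete numbers: a random 75%/25% train/test split, and S = 5, |D| = 20, R = 20, K = 5, T = 800, β = 2 for the MNIST runs. The snippet below is a minimal sketch of how those quoted values could be wired up in PyTorch; it is not the authors' code, and the helper name `make_client_split`, the `CONFIG` dict layout, and the use of torchvision's MNIST loader are illustrative assumptions. Only the numeric values come from the quoted text.

```python
# Hedged sketch of the quoted split and hyperparameters; not the authors' implementation.
import torch
from torchvision import datasets, transforms

# Hyperparameters quoted in the "Experiment Setup" row (MNIST case).
CONFIG = {
    "clients_per_round": 5,   # S = 5 for MNIST (S = 10 for Synthetic)
    "batch_size": 20,         # |D| = 20
    "local_rounds": 20,       # R = 20
    "inner_steps": 5,         # K = 5
    "global_rounds": 800,     # T = 800 for MNIST (T = 600 for Synthetic)
    "beta": 2.0,              # β = 2 for pFedMe
}

def make_client_split(dataset, train_frac=0.75, seed=0):
    """Random 75% / 25% train/test split, as stated in the "Dataset Splits" row."""
    generator = torch.Generator().manual_seed(seed)
    n_train = int(train_frac * len(dataset))
    n_test = len(dataset) - n_train
    return torch.utils.data.random_split(dataset, [n_train, n_test], generator=generator)

if __name__ == "__main__":
    mnist = datasets.MNIST(root="./data", download=True, train=True,
                           transform=transforms.ToTensor())
    train_set, test_set = make_client_split(mnist)
    print(len(train_set), len(test_set))  # 45000 / 15000 for the standard 60k training set
```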