Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness
Authors: Shiyun Lin, Yuze Han, Xiang Li, Zhihua Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct a large number of experiments to show the empirical superiority of our method over several state-of-the-art methods on the three aspects. |
| Researcher Affiliation | Academia | Shiyun Lin (1,2), Yuze Han (2), Xiang Li (2), Zhihua Zhang (1,2); (1) Center for Statistical Science, Peking University; (2) School of Mathematical Sciences, Peking University; emails: shiyunlin@stu.pku.edu.cn, hanyuze97@pku.edu.cn, lx10077@pku.edu.cn, zhzhang@math.pku.edu.cn |
| Pseudocode | Yes | Algorithm 1 lp-proj: Projection-based Lp Regularized Personalized Federated Learning |
| Open Source Code | Yes | Source code for the reproduction of numerical results is available at https://github.com/desternylin/perfed. |
| Open Datasets | Yes | We test lp-proj as well as other comparable algorithms on six datasets from common ML and FL benchmarks [50, 8]. |
| Dataset Splits | Yes | For each client, the training and testing data are pre-specified as in the ML community, and 20% of training data is randomly extracted to construct a validation set, keeping the remaining 80% as the training set. |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA RTX 3090 GPU. |
| Software Dependencies | Yes | The experiments are implemented with Python 3.8.13 and PyTorch 1.11.0. |
| Experiment Setup | Yes | More details about hyperparameter tuning are provided in Appendix C.2. |
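
The split protocol in the "Dataset Splits" row is simple to reproduce. Below is a minimal sketch, not the authors' code (their implementation is in the repository linked above); `client_train_sets`, a mapping from client id to that client's pre-specified training `Dataset`, is a hypothetical name introduced here for illustration. The versions come from the "Software Dependencies" row.

```python
# Minimal sketch of the per-client split from the "Dataset Splits" row:
# 20% of each client's training data is randomly held out as a validation
# set, and the remaining 80% is kept for training.
# Assumed environment (per the "Software Dependencies" row):
# Python 3.8.13, PyTorch 1.11.0.
import torch
from torch.utils.data import random_split

def split_client_data(client_train_sets, seed=0):
    """Return {client_id: (train_subset, val_subset)} with an 80/20 split.

    `client_train_sets` is a hypothetical dict mapping each client id to
    that client's pre-specified training torch.utils.data.Dataset.
    """
    generator = torch.Generator().manual_seed(seed)  # fixed seed for reproducibility
    splits = {}
    for client_id, dataset in client_train_sets.items():
        n_val = int(0.2 * len(dataset))      # 20% for validation
        n_train = len(dataset) - n_val       # remaining 80% for training
        train_set, val_set = random_split(
            dataset, [n_train, n_val], generator=generator
        )
        splits[client_id] = (train_set, val_set)
    return splits
```

Passing an explicitly seeded `torch.Generator` to `random_split` makes the extracted validation sets deterministic across runs, which matters when comparing against the reported numbers.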