Personalized Federated Learning under Mixture of Distributions
Authors: Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of California, Los Angeles, USA; (2) Department of Computer Science, Virginia Tech, Blacksburg, USA; (3) NEC Laboratories America, Princeton, USA. |
| Pseudocode | Yes | Algorithm 1 (Algorithm of FedGMM) and Algorithm 2 (Federated GMM, Unsupervised) in Appendix A. |
| Open Source Code | Yes | More implementation details are included in Appendix C.1. Code: https://github.com/zshuai8/FedGMM_ICML2023 |
| Open Datasets | Yes | Real datasets. We also use three federated benchmark datasets spanning different machine learning tasks to evaluate the proposed approach: image classification on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), and handwritten character recognition on FEMNIST (Caldas et al., 2018a). |
| Dataset Splits | Yes | For all tasks, we randomly split each local dataset into training (60%), validation (20%), and test (20%) sets. (A minimal split sketch follows this table.) |
| Hardware Specification | Yes | In this paper, we implemented our method on a Linux machine with 8 NVIDIA A100 GPUs, each with 80GB of memory. |
| Software Dependencies | Yes | The software environment is CUDA 11.6 with driver version 520.61.05. We used Python 3.9.13 and PyTorch 1.12.1 (Paszke et al., 2019) to implement our project. |
| Experiment Setup | Yes | In our experiments, the number of local epochs of each method is set to 1, the total number of communication rounds is set to 200, and the batch size is set to 128, as in (Marfoq et al., 2021). For our proposed FedGMM, the learning rate is set to 0.01 on CIFAR-10 and 0.001 on CIFAR-100 and FEMNIST. M1 and M2 of FedGMM are tuned via grid search; for our method, M1 = 3 and M2 = 3. (A configuration sketch also follows this table.) |
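Since the dataset splits are stated only in prose, here is a minimal sketch of the per-client 60/20/20 split, assuming a map-style PyTorch dataset per client; the function name and seed are illustrative assumptions, not taken from the authors' released repository.

```python
# Minimal sketch of the quoted 60/20/20 per-client split; names and the seed
# are hypothetical, not taken from the FedGMM repository.
import torch
from torch.utils.data import Dataset, random_split

def split_local_dataset(dataset: Dataset, seed: int = 42):
    """Randomly split one client's local dataset into train/val/test (60/20/20)."""
    n = len(dataset)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    n_test = n - n_train - n_val  # remainder absorbs rounding error
    generator = torch.Generator().manual_seed(seed)  # reproducible split
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)
```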
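Likewise, the quoted training setup can be collected into a single configuration sketch. The key names below are our assumptions for illustration, not the authors' actual config schema.

```python
# Hypothetical configuration dict restating the quoted experiment setup;
# key names are illustrative, not the authors' schema.
FEDGMM_CONFIG = {
    "local_epochs": 1,            # local epochs per communication round
    "communication_rounds": 200,  # total federated rounds
    "batch_size": 128,            # as in Marfoq et al. (2021)
    "learning_rate": {            # per-dataset learning rates
        "CIFAR-10": 0.01,
        "CIFAR-100": 0.001,
        "FEMNIST": 0.001,
    },
    "M1": 3,  # FedGMM mixture size, tuned via grid search
    "M2": 3,  # FedGMM mixture size, tuned via grid search
}
```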