Personalized Federated Learning via Variational Bayesian Inference
Authors: Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed method outperforms other advanced personalized methods on personalized models, e.g., pFedBayes respectively outperforms other SOTA algorithms by 1.25%, 0.42% and 11.71% on MNIST, FMNIST and CIFAR-10 under non-i.i.d. limited data. |
| Researcher Affiliation | Collaboration | (1) LSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; (2) Noah's Ark Lab, Huawei, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 pFedBayes: Personalized Federated Learning via Bayesian Inference Algorithm |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of source code. |
| Open Datasets | Yes | We generate the non-i.i.d. datasets based on three public benchmark datasets, MNIST (LeCun et al., 2010; 1998), FMNIST (Fashion-MNIST) (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009). |
| Dataset Splits | Yes | For small, medium and large datasets of MNIST/FMNIST, there were 50, 200, 900 training samples and 950, 800, 300 test samples for each class, respectively. For the small, medium and large datasets of CIFAR-10, there were 25, 100, 450 training samples and 475, 400, 150 test samples in each class, respectively. |
| Hardware Specification | Yes | We did all experiments in this paper using servers with two GPUs (NVIDIA Tesla P100 with 16GB memory), two CPUs (each with 22 cores, Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz), and 192 GB memory. |
| Software Dependencies | No | The paper mentions 'We use PyTorch (Paszke et al., 2019) for all experiments,' but it does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | Based on the experimental results, we set the learning rate of FedAvg and Per-FedAvg to 0.01. The learning rate and regularization weight of FedProx are respectively set as 0.01 and λ = 0.001. ... For the proposed pFedBayes, we set the initialization of weight parameters ρ = 2.5, the tradeoff parameter ζ = 10, and the learning rates of the personalized model and global model η1 = η2 = 0.001. (The reported values are collected in the configuration sketch below the table.) |
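For readers who want to re-run the reported setup, the hyperparameters and per-class split sizes quoted in the table are gathered below as a minimal configuration sketch. The dataclass and field names are hypothetical, since the authors released no source code; only the numeric values are taken from the paper.

```python
# Minimal configuration sketch, assuming a PyTorch-style training script.
# Only the numeric values quoted in the table come from the paper; all class
# and field names here are hypothetical, since no official code was released.
from dataclasses import dataclass

# Per-class (train, test) sample counts for the small/medium/large non-i.i.d. splits.
SPLIT_SIZES = {
    "MNIST/FMNIST": {"small": (50, 950), "medium": (200, 800), "large": (900, 300)},
    "CIFAR-10": {"small": (25, 475), "medium": (100, 400), "large": (450, 150)},
}

@dataclass
class PFedBayesConfig:
    rho_init: float = 2.5          # initialization of the variational weight parameter rho
    zeta: float = 10.0             # trade-off parameter zeta
    lr_personalized: float = 1e-3  # eta_1, personalized-model learning rate
    lr_global: float = 1e-3        # eta_2, global-model learning rate

@dataclass
class BaselineConfig:
    fedavg_lr: float = 0.01        # FedAvg / Per-FedAvg learning rate
    fedprox_lr: float = 0.01       # FedProx learning rate
    fedprox_lambda: float = 0.001  # FedProx proximal regularization weight (lambda)

if __name__ == "__main__":
    print(PFedBayesConfig(), BaselineConfig(), SPLIT_SIZES["CIFAR-10"]["small"])
```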