pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning

Authors: Jiaqi Wang, Qi Li, Lingjuan Lyu, Fenglong Ma

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conducted extensive experiments across three datasets, examining both IID and non-IID settings. The results demonstrate that pFedClub outperforms baseline approaches, achieving state-of-the-art performance."
Researcher Affiliation | Collaboration | Jiaqi Wang (The Pennsylvania State University), Qi Li (Iowa State University), Lingjuan Lyu (Sony AI), Fenglong Ma (The Pennsylvania State University)
Pseudocode | Yes | "Algorithm 1: The CMSR Algorithm"
Open Source Code | Yes | "The source code can be found at https://github.com/JackqqWang/24club."
Open Datasets | Yes | "In our experiments, we utilize three commonly used datasets to validate the performance of the proposed pFedClub, including MNIST (https://yann.lecun.com/exdb/mnist/), SVHN (http://ufldl.stanford.edu/housenumbers/), and CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html)." (loading sketch below)
Dataset Splits | Yes | "We randomly divide the datasets into three parts: 72% for training, 20% for testing, and 8% as the public dataset." (split sketch below)
Hardware Specification | Yes | "We run all the experiments on NVIDIA A100 with CUDA version 12.0 on an Ubuntu 20.04.6 LTS server."
Software Dependencies | Yes | "All baselines and the proposed pFedClub are implemented in PyTorch 2.0.1."
Experiment Setup | Yes | "For the proposed pFedClub and baseline pFedHR, we set the number of clusters K = 4 following [23], and the local training epoch and the server fine-tuning epoch are equal to 10 and 3, respectively. The hyperparameter λ in Eq. (5) is 0.2. The hyperparameter τ in Eq. (3) is 0.07. We use Adam as the optimizer. The learning rates for local client training and server fine-tuning both equal 0.001." (config sketch below)
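
The three datasets listed above are all available through torchvision, which mirrors the linked sources. The following is a minimal loading sketch; the root path, the ToTensor transform, and the use of the training splits are assumptions, not taken from the released code.

# Hypothetical loading sketch: torchvision mirrors of MNIST, SVHN, and CIFAR-10.
# Paths and transforms are illustrative assumptions.
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=tfm)
svhn = datasets.SVHN(root="./data", split="train", download=True, transform=tfm)
cifar = datasets.CIFAR10(root="./data", train=True, download=True, transform=tfm)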
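
The 72% / 20% / 8% partition reported under Dataset Splits can be reproduced along the following lines. This is a sketch assuming torch.utils.data.random_split and a fixed seed; the paper does not specify its partitioning code, and the stand-in tensor dataset only keeps the example self-contained.

import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in dataset; in practice this would be MNIST, SVHN, or CIFAR-10
# loaded as in the sketch above.
full = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))

n = len(full)
n_train = int(0.72 * n)           # 72% for training
n_test = int(0.20 * n)            # 20% for testing
n_public = n - n_train - n_test   # remaining 8% as the public dataset

train_set, test_set, public_set = random_split(
    full, [n_train, n_test, n_public],
    generator=torch.Generator().manual_seed(0))  # fixed seed is an assumption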
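
Finally, the hyperparameters reported under Experiment Setup can be gathered into one place. The Config container and the placeholder model below are illustrative assumptions rather than the authors' code; only the values come from the paper.

from dataclasses import dataclass

import torch

@dataclass
class Config:                  # hypothetical container, not from the released code
    num_clusters: int = 4      # K = 4, following [23]
    local_epochs: int = 10     # local training epochs per round
    finetune_epochs: int = 3   # server fine-tuning epochs
    lam: float = 0.2           # lambda in Eq. (5)
    tau: float = 0.07          # temperature tau in Eq. (3)
    lr: float = 1e-3           # shared by local training and server fine-tuning

cfg = Config()

# Both local clients and the server use Adam at the reported learning rate.
model = torch.nn.Linear(32 * 32 * 3, 10)   # placeholder model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr)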