Towards Personalized Federated Learning via Heterogeneous Model Reassembly

Authors: Jiaqi Wang, Xingyi Yang, Suhan Cui, Liwei Che, Lingjuan Lyu, Dongkuan (DK) Xu, Fenglong Ma

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that pFedHR outperforms baselines on three datasets under both IID and Non-IID settings. Additionally, pFedHR effectively reduces the adverse impact of using different public data and dynamically generates diverse personalized models in an automated manner.
Researcher Affiliation | Collaboration | Jiaqi Wang¹, Xingyi Yang², Suhan Cui¹, Liwei Che¹, Lingjuan Lyu³, Dongkuan Xu⁴, Fenglong Ma¹; ¹The Pennsylvania State University, ²National University of Singapore, ³Sony AI, ⁴North Carolina State University
Pseudocode | Yes | Algorithm 1: Reassembly Candidate Search [...] Algorithm 1: Algorithm Flow of pFedHR.
Open Source Code | Yes | Source code can be found in the link https://github.com/JackqqWang/pfedHR
Open Datasets | Yes | Datasets. We conduct experiments for the image classification task on MNIST, SVHN, and CIFAR10 datasets under both IID and non-IID data distribution settings. We split the datasets into 80% for training and 20% for testing.
Dataset Splits | No | The paper states 'We split the datasets into 80% for training and 20% for testing' but does not specify a separate validation split or describe how any validation was performed.
Hardware Specification | Yes | The proposed pFedHR is implemented in PyTorch 2.0.1 and runs on an NVIDIA A100 with CUDA version 12.0 on an Ubuntu 20.04.6 LTS server.
Software Dependencies | Yes | The proposed pFedHR is implemented in PyTorch 2.0.1 and runs on an NVIDIA A100 with CUDA version 12.0 on an Ubuntu 20.04.6 LTS server.
Experiment Setup | Yes | The hyperparameter λ in Eq. (6) is 0.2. We use Adam as the optimizer. The learning rates for local client training and server fine-tuning are both 0.001. (A hedged configuration sketch follows the table.)
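
The snippet below is a minimal sketch, not the authors' implementation, of the setup reported above: an 80%/20% train/test split, the Adam optimizer with learning rate 0.001, and λ = 0.2 as the weight in Eq. (6). The dataset choice (CIFAR10), the placeholder model, and the way λ combines two loss terms are illustrative assumptions; the actual loss form is defined in Eq. (6) of the paper, and the released code at the linked repository is authoritative.

```python
# Sketch of the reported experiment setup; architecture and loss combination are stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms

LAMBDA = 0.2  # reported weight for Eq. (6); the exact loss form is defined in the paper
LR = 1e-3     # reported learning rate for both local training and server fine-tuning

# CIFAR10 with an 80%/20% train/test split, as reported in the paper
dataset = datasets.CIFAR10(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Placeholder client model (pFedHR actually uses heterogeneous client architectures)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                      nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:
    optimizer.zero_grad()
    task_loss = criterion(model(images), labels)
    # Hypothetical second term standing in for the regularizer weighted by λ in Eq. (6)
    reg_loss = torch.tensor(0.0)
    loss = task_loss + LAMBDA * reg_loss
    loss.backward()
    optimizer.step()
    break  # single local update shown for illustration
```

This only illustrates the reported hyperparameters in context; the reassembly and fine-tuning logic specific to pFedHR is available in the repository cited in the Open Source Code row.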