Personalized Cross-Silo Federated Learning on Non-IID Data

Authors: Yutao Huang, Lingyang Chu, Zirui Zhou, Lanjun Wang, Jiangchuan Liu, Jian Pei, Yong Zhang

AAAI 2021, pp. 7865-7873 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our extensive experiments on benchmark data sets demonstrate the superior performance of the proposed methods." "In this section, we evaluate the performance of FedAMP and HeurFedAMP and compare them with the state-of-the-art personalized federated learning algorithms, including SCAFFOLD (Karimireddy et al. 2019), APFL (Deng, Kamani, and Mahdavi 2020), FedAvg-FT and FedProx-FT (Wang et al. 2019)."
Researcher Affiliation | Collaboration | Simon Fraser University, Burnaby, Canada; McMaster University, Hamilton, Canada; Huawei Technologies Canada, Burnaby, Canada
Pseudocode | Yes | Algorithm 1: FedAMP (a hedged sketch of its attentive aggregation step follows the table)
Open Source Code | No | "The API of this work is available at https://t.ly/nGN9, free registration at Huawei Cloud is required before use." This provides access to a hosted API rather than the source code of the method, and the registration requirement further disqualifies it as concrete open-source access.
Open Datasets | Yes | "We use four public benchmark data sets, MNIST (LeCun, Cortes, and Burges 2010), FMNIST (Fashion-MNIST) (Xiao, Rasul, and Vollgraf 2017), EMNIST (Extended MNIST) (Cohen et al. 2017) and CIFAR100 (Krizhevsky and Hinton 2009)." (a loading sketch follows the table)
Dataset Splits | No | The paper describes how data is partitioned among clients and notes that "Every client has 100 testing samples with the same distribution as its training data," which implies a train/test split. It gives no validation split, however, and no explicit per-dataset training/testing counts, only this general rule (an illustrative partition helper follows the table).
Hardware Specification | Yes | "All the methods are implemented in PyTorch 1.3 running on Dell Alienware with Intel(R) Core(TM) i9-9980XE CPU, 128G memory, NVIDIA 1080Ti, and Ubuntu 16.04."
Software Dependencies | Yes | "All the methods are implemented in PyTorch 1.3 running on Dell Alienware with Intel(R) Core(TM) i9-9980XE CPU, 128G memory, NVIDIA 1080Ti, and Ubuntu 16.04." (a version sanity check follows the table)
Experiment Setup | No | The paper defers these details to supplementary material: "Please see Appendix C (Huang et al. 2020) for the details of the practical non-IID data setting on MNIST, FMNIST and CIFAR100, the implementation details and the hyperparameter settings of all the methods," so they are not reported in the main body.
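
The Pseudocode row refers to Algorithm 1 (FedAMP), whose core server step builds one personalized "cloud" model per client as a similarity-weighted convex combination of all client models. The sketch below is a minimal illustration in the spirit of HeurFedAMP, assuming softmax-normalized cosine similarities over flattened parameter vectors; `alpha`, `sigma`, and the flattening are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import torch

def attentive_aggregate(client_weights, alpha=0.1, sigma=10.0):
    """Sketch of a HeurFedAMP-style attentive aggregation step.

    client_weights: list of flattened model parameter vectors (torch.Tensor).
    Returns one personalized "cloud" model per client, formed as a convex
    combination of all client models weighted by pairwise cosine similarity.
    alpha and sigma are illustrative hyperparameters, not the paper's values.
    """
    W = torch.stack(client_weights)                      # (n, d)
    W_unit = torch.nn.functional.normalize(W, dim=1)
    cos = W_unit @ W_unit.t()                            # pairwise cosine similarities
    n = W.shape[0]
    personalized = []
    for i in range(n):
        # Attention-style weights over the *other* clients' models.
        logits = sigma * torch.cat([cos[i, :i], cos[i, i + 1:]])
        xi = alpha * torch.softmax(logits, dim=0)        # weights sum to alpha
        others = torch.cat([W[:i], W[i + 1:]])
        # Self-weight xi_ii = 1 - sum_j xi_ij keeps the combination convex.
        personalized.append((1.0 - xi.sum()) * W[i] + xi @ others)
    return personalized
```

On the client side, FedAMP then pairs each personalized cloud model with a proximal term that pulls the local model toward it during local training; that step is omitted here for brevity.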
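For the Open Datasets row, all four benchmarks are available through torchvision, so loading them is routine. A hedged sketch follows; the data root and the EMNIST split name are assumptions, since the paper's exact choices live in its Appendix C.

```python
from torchvision import datasets, transforms

ROOT = "./data"              # hypothetical data root; the paper names none
tfm = transforms.ToTensor()

mnist    = datasets.MNIST(ROOT, train=True, download=True, transform=tfm)
fmnist   = datasets.FashionMNIST(ROOT, train=True, download=True, transform=tfm)
# EMNIST requires a split name; "byclass" is one option, but the paper's
# choice is specified in Appendix C rather than the main text.
emnist   = datasets.EMNIST(ROOT, split="byclass", train=True, download=True, transform=tfm)
cifar100 = datasets.CIFAR100(ROOT, train=True, download=True, transform=tfm)
```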
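For the Dataset Splits row, the quoted sentence pins down only one rule: 100 per-client test samples drawn from the same distribution as that client's training data. The helper below is a hypothetical partitioner consistent with that rule; `classes_per_client` and the possibility of sample overlap across clients are illustrative simplifications, not the Appendix C recipe.

```python
import numpy as np

def pathological_split(labels, n_clients, classes_per_client=2,
                       test_per_client=100, seed=0):
    """Illustrative non-IID partition: each client sees only a few classes,
    and 100 held-out test samples follow that client's own distribution.
    Clients may share samples here; a disjoint assignment would slice a
    shuffled per-class pool instead.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    train_idx, test_idx = {}, {}
    for c in range(n_clients):
        own = rng.choice(classes, size=classes_per_client, replace=False)
        pool = rng.permutation(np.flatnonzero(np.isin(labels, own)))
        test_idx[c] = pool[:test_per_client]     # same distribution as training
        train_idx[c] = pool[test_per_client:]
    return train_idx, test_idx
```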
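For the Hardware Specification and Software Dependencies rows, only the framework version and GPU availability can be checked programmatically; the snippet below is a trivial sanity check against the quoted setup, with no further assumptions.

```python
import torch

print(torch.__version__)                  # paper reports PyTorch 1.3
print(torch.cuda.is_available())          # True on the reported 1080Ti box
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "GeForce GTX 1080 Ti"
```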