FedABC: Targeting Fair Competition in Personalized Federated Learning

Authors: Dui Wang, Li Shen, Yong Luo, Han Hu, Kehua Su, Yonggang Wen, Dacheng Tao

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on two popular datasets under different settings, and the results demonstrate that our FedABC can significantly outperform the existing counterparts.
Researcher Affiliation | Collaboration | 1) National Engineering Research Center for Multimedia Software, School of Computer Science, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China; 2) Hubei Luojia Laboratory, Wuhan, China; 3) JD Explore Academy, China; 4) School of Information and Electronics, Beijing Institute of Technology, China; 5) School of Computer Science and Engineering, Nanyang Technological University, Singapore
Pseudocode | Yes | Algorithm 1: Federated Averaging via Binary Classification (a minimal FedAvg-style sketch appears after this table)
Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit statement of code release.
Open Datasets | Yes | We use MNIST (LeCun and Bottou 1998) and CIFAR-10 (Krizhevsky and Hinton 2009) as benchmarks. To simulate the heterogeneous federated learning scenario, we follow previous works (Yurochkin et al. 2019; Wang et al. 2020) that utilize the Dirichlet distribution Dir(α) to partition the training dataset and generate the corresponding test data for each client following the same distribution. (see the partitioning sketch after this table)
Dataset Splits | No | The paper mentions partitioning the training dataset and generating test data, but it does not explicitly describe a separate validation split, its size, or how it was formed.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using the SGD optimizer but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Our method has four hyper-parameters: m_p, m_n, m_nn, and σ. ... For CIFAR-10, we set them as 0.85, 0.2, 0.3, and 2, respectively. For MNIST, we set them as 0.75, 0.25, 0.3, and 2, respectively. ... We use the SGD optimizer with weight decay 1e-5 and 0.9 momentum, and the batch size is 64. For MNIST, the learning rate is 0.01. For CIFAR-10, the learning rate is 0.1. We train every method for 100 rounds and 200 rounds on MNIST and CIFAR-10, respectively. For the federated framework setting, the participation rate of clients is set as 0.5, which means that 10 randomly selected clients will be activated in each communication round. The local training epochs are set as 5 for all the experiments. (the reported settings are collected into a configuration sketch below)
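
For reference, below is a minimal sketch of one FedAvg-style communication round in PyTorch, matching the "Federated Averaging" half of Algorithm 1's title. All names (fedavg_round, client_loaders, etc.) are illustrative assumptions, and the local objective is plain cross-entropy; FedABC's actual algorithm trains with its margin-based binary-classification loss, which is not reproduced here.

```python
# Hypothetical FedAvg-style round; a sketch, not the paper's exact Algorithm 1.
import copy
import random
import torch

def fedavg_round(global_model, client_loaders, participation=0.5,
                 local_epochs=5, lr=0.1):
    """Sample clients, train locally, then average weights by client data size."""
    k = max(1, int(participation * len(client_loaders)))
    sampled = random.sample(client_loaders, k)
    states, sizes = [], []
    for loader in sampled:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr,
                              momentum=0.9, weight_decay=1e-5)
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                # FedABC would use its binary-classification loss here;
                # cross-entropy is a generic stand-in.
                torch.nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
        states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    # Standard FedAvg aggregation: size-weighted parameter average.
    total = sum(sizes)
    avg = {key: sum(s[key].float() * (n / total) for s, n in zip(states, sizes))
           for key in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```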
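The Dirichlet partition quoted in the Open Datasets row is a standard non-IID recipe (Yurochkin et al. 2019; Wang et al. 2020): for each class, per-client proportions are drawn from Dir(α) and the class's samples are split accordingly. A minimal NumPy sketch follows; num_clients, alpha, and seed are illustrative defaults, not values reported in the paper.

```python
# Sketch of the Dirichlet-based non-IID data partition.
import numpy as np

def dirichlet_partition(labels, num_clients=20, alpha=0.5, seed=0):
    """Split sample indices across clients with class proportions ~ Dir(alpha)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Per-client share of this class, drawn from a Dirichlet prior.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```

Smaller α yields more skewed (more heterogeneous) per-client label distributions; large α approaches an IID split.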
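Finally, the settings reported in the Experiment Setup row can be collected in one place. The dict layout and helper below are just one way to organize them; the values are those quoted above, with m_p, m_n, m_nn, and sigma following the paper's notation.

```python
# Reported hyper-parameters, organized as an illustrative configuration.
import torch

CONFIG = {
    "cifar10": dict(m_p=0.85, m_n=0.2, m_nn=0.3, sigma=2, lr=0.1, rounds=200),
    "mnist":   dict(m_p=0.75, m_n=0.25, m_nn=0.3, sigma=2, lr=0.01, rounds=100),
    # Settings shared by both datasets.
    "batch_size": 64,
    "local_epochs": 5,
    "participation_rate": 0.5,  # 10 randomly selected clients per round
}

def make_optimizer(model, dataset="cifar10"):
    """SGD with momentum 0.9 and weight decay 1e-5, as reported in the paper."""
    return torch.optim.SGD(model.parameters(),
                           lr=CONFIG[dataset]["lr"],
                           momentum=0.9,
                           weight_decay=1e-5)
```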