Federated Learning with Fair Averaging

Authors: Zheng Wang, Xiaoliang Fan, Jianzhong Qi, Chenglu Wen, Cheng Wang, Rongshan Yu

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on a suite of federated datasets confirm that FedFV compares favorably against state-of-the-art methods in terms of fairness, accuracy and efficiency.
Researcher Affiliation | Academia | Zheng Wang¹, Xiaoliang Fan¹, Jianzhong Qi², Chenglu Wen¹, Cheng Wang¹ and Rongshan Yu¹. ¹Fujian Key Laboratory of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University, Xiamen, China. ²School of Computing and Information Systems, University of Melbourne, Melbourne, Australia.
Pseudocode | Yes | Algorithm 1 FedFV, Input: T, m, α, τ, η, θ0, pk, k = 1, ..., K, ...; Algorithm 2 Mitigate Internal Conflict, Input: POt, Gt, α, ...; Algorithm 3 Mitigate External Conflict, Input: gt, GH, τ (a minimal sketch of the underlying gradient projection is given after this table).
Open Source Code | Yes | The source code is available at https://github.com/WwZzz/easyFL.
Open Datasets | Yes | We evaluate FedFV on three public datasets: CIFAR-10 [Krizhevsky, 2012], Fashion-MNIST [Xiao et al., 2017] and MNIST [LeCun et al., 1998] (see the dataset-loading sketch after this table).
Dataset Splits | Yes | The local dataset is split into training and testing data with percentages of 80% and 20% (see the split sketch after this table).
Hardware Specification | Yes | All our experiments are implemented on a 64GB-memory Ubuntu 16.04.6 server with 40 Intel(R) Xeon(R) E5-2630 v4 CPUs @ 2.20GHz and 4 NVidia(R) 2080Ti GPUs.
Software Dependencies | Yes | All code is implemented in PyTorch version 1.3.1.
Experiment Setup | Yes | For all experiments, we fix the local epoch E = 1 and use batch size B ∈ {full, 64} for CIFAR-10 and MNIST and B ∈ {full, 400} for Fashion-MNIST, running Stochastic Gradient Descent (SGD) on the local datasets with stepsize η ∈ {0.01, 0.1}.
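
The gradient-projection step at the heart of Algorithms 2 and 3 (a client gradient that conflicts with another, i.e. has a negative inner product with it, is projected onto that gradient's normal plane before averaging) can be sketched in a few lines of PyTorch. This is a minimal illustration of the projection and the weighted average only, not the authors' implementation; the priority order POt, the fraction α, and the handling of outdated gradients GH with delay bound τ follow the paper's pseudocode and are omitted here.

```python
import torch

def project_conflict(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """Project g_i onto the normal plane of g_j when the two gradients conflict.

    A conflict means a negative inner product <g_i, g_j>; after projection the
    returned gradient no longer has a component opposing g_j. Both inputs are
    flattened 1-D gradient tensors.
    """
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / (g_j.norm() ** 2 + 1e-12)) * g_j
    return g_i

def weighted_average(grads, weights):
    """Aggregate the (de-conflicted) client gradients with weights p_k."""
    total = sum(weights)
    return sum((w / total) * g for w, g in zip(weights, grads))
```

In the paper, the same projection is applied among the selected clients' gradients, ordered by their losses (internal conflict), and against gradients kept from earlier rounds (external conflict), before the weighted average is taken.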
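For readers checking the data dependencies, the three public datasets can be downloaded through torchvision; this snippet only illustrates data access and is not the authors' federated partitioning code.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# The three benchmark datasets named in the paper, fetched via torchvision.
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root="./data", train=True, download=True, transform=to_tensor)
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
```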
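The 80%/20% train/test split of each client's local data can be reproduced with a standard random split; the function name and the use of torch.utils.data.random_split are illustrative assumptions, not taken from the released code.

```python
from torch.utils.data import random_split

def split_local_dataset(local_dataset, train_frac=0.8):
    """Split one client's local dataset into 80% training / 20% testing data."""
    n_train = int(train_frac * len(local_dataset))
    n_test = len(local_dataset) - n_train
    return random_split(local_dataset, [n_train, n_test])
```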
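The local training configuration in the last row (E = 1 local epoch, full-batch or mini-batch SGD with stepsize η ∈ {0.01, 0.1}) corresponds to a per-client update loop like the sketch below; the function and argument names are assumptions for illustration, not the authors' code.

```python
import torch
from torch.utils.data import DataLoader

def local_update(model, train_set, lr=0.1, batch_size=64, epochs=1, device="cpu"):
    """Run E epochs of SGD on one client's local training data.

    Passing batch_size=None uses the full local dataset per step, matching
    the 'full' batch setting reported in the paper.
    """
    loader = DataLoader(
        train_set,
        batch_size=len(train_set) if batch_size is None else batch_size,
        shuffle=True,
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.to(device)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model
```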