FedGS: Federated Graph-Based Sampling with Arbitrary Client Availability

Authors: Zheng Wang, Xiaoliang Fan, Jianzhong Qi, Haibing Jin, Peizhen Yang, Siqi Shen, Cheng Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate the effectiveness of FEDGS, we conduct experiments on three datasets under a comprehensive set of seven client availability modes. Our experimental results confirm FEDGS's advantage in both enabling a fair client-sampling scheme and improving the model performance under arbitrary client availability. (A generic availability sketch follows this table.)
Researcher Affiliation | Academia | 1) Fujian Key Laboratory of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University, Xiamen, China; 2) School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
Pseudocode | Yes | Algorithm 1: Federated Graph-Based Sampling (see the sampling sketch after this table)
Open Source Code | Yes | Our code is available at https://github.com/WwZzz/FedGS.
Open Datasets | Yes | We validate FEDGS on three commonly used federated datasets: Synthetic(0.5, 0.5) (Li et al. 2020), CIFAR10 (Krizhevsky and Hinton 2009) and FashionMNIST (Xiao, Rasul, and Vollgraf 2017).
Dataset Splits | Yes | For each dataset, we tune the hyper-parameters by grid search with FedAvg, and we apply the parameters found optimal on FedAvg's validation set to all methods.
Hardware Specification | Yes | All our experiments are run on a 64 GB RAM Ubuntu 18.04.6 server with an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz and 4 NVIDIA(R) 2080Ti GPUs.
Software Dependencies | Yes | All code is implemented in PyTorch 1.12.0.
Experiment Setup | Yes | The batch size is B = 10 for Synthetic and B = 32 for both CIFAR10 and FashionMNIST. The optimal parameters for Synthetic, CIFAR10, and FashionMNIST are respectively η = 0.1, E = 10; η = 0.03, E = 10; and η = 0.1, E = 10. We decay the learning rate round-wise by a factor of 0.998 for all datasets. (See the config sketch after this table.)
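
The Research Type row quotes experiments under seven client-availability modes, whose exact definitions are not reproduced in this report. As a generic stand-in only, the following Python sketch models arbitrary availability by giving each client an independent per-round probability of being online; the Bernoulli model, the probability range, and the name `available_clients` are illustrative assumptions, not the paper's protocol.

```python
# Hypothetical sketch: per-round client availability under a Bernoulli model.
# This is NOT one of the paper's seven modes; it only illustrates the setting.
import numpy as np

def available_clients(probs, rng):
    """Return indices of clients that are online this round."""
    online = rng.random(len(probs)) < probs
    return np.flatnonzero(online)

rng = np.random.default_rng(0)
probs = rng.uniform(0.1, 0.9, size=100)  # heterogeneous availability rates
for t in range(3):
    print(f"round {t}: {len(available_clients(probs, rng))} clients online")
```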
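
The Pseudocode row names Algorithm 1 (Federated Graph-Based Sampling) but does not reproduce its steps. The sketch below is one plausible reading under stated assumptions: clients are connected by a data-distribution similarity graph, and among the currently available clients a mutually dissimilar subset is selected greedily, so the sampled group's pooled data is closer to the global distribution. The function names and the greedy farthest-point heuristic are illustrative, not the paper's exact procedure.

```python
# A minimal, hypothetical sketch in the spirit of graph-based client sampling.
# build_similarity_graph, sample_clients, and the greedy heuristic are
# illustrative assumptions, not Algorithm 1 as published.
import numpy as np

def build_similarity_graph(label_dists):
    """Pairwise cosine similarity between clients' label distributions."""
    d = np.asarray(label_dists, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return d @ d.T  # sim[i, j]; higher = more similar local data

def sample_clients(sim, available, m, rng):
    """Greedily pick up to m available clients that are mutually dissimilar."""
    m = min(m, len(available))
    chosen = [rng.choice(available)]
    while len(chosen) < m:
        rest = [c for c in available if c not in chosen]
        # Next client: the one least similar to everything already chosen.
        nxt = min(rest, key=lambda c: sim[c, chosen].max())
        chosen.append(nxt)
    return chosen

rng = np.random.default_rng(0)
label_dists = rng.dirichlet(np.ones(10), size=100)  # 100 clients, 10 classes
sim = build_similarity_graph(label_dists)
available = list(rng.choice(100, size=30, replace=False))  # arbitrary availability
print(sample_clients(sim, available, m=10, rng=rng))
```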
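
The Experiment Setup row can be collected into a small config sketch. The values come straight from the quoted text; the dictionary layout, key names, and the `lr_at_round` helper are illustrative. The quoted round-wise decay means the learning rate at round t is η · 0.998^t.

```python
# Hyper-parameters as quoted in the Experiment Setup row; layout is illustrative.
CONFIGS = {
    "Synthetic":    {"batch_size": 10, "local_epochs": 10, "lr": 0.1},
    "CIFAR10":      {"batch_size": 32, "local_epochs": 10, "lr": 0.03},
    "FashionMNIST": {"batch_size": 32, "local_epochs": 10, "lr": 0.1},
}

def lr_at_round(base_lr: float, round_idx: int, decay: float = 0.998) -> float:
    """Round-wise exponential decay of the learning rate."""
    return base_lr * decay ** round_idx

print(lr_at_round(CONFIGS["CIFAR10"]["lr"], 500))  # ~0.0110 after 500 rounds
```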