Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation

Authors: Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, Jia Liu

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on different datasets show SAFARI significantly improves the performance under incomplete client participation.
Researcher Affiliation | Academia | 1) Department of Computing and Information Sciences Ph.D., Rochester Institute of Technology, Rochester, NY, USA; 2) Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH, USA; 3) Department of Computer Science, Wayne State University, Detroit, MI, USA; 4) Department of Computer Science and Engineering, University of Louisville, Louisville, KY, USA.
Pseudocode | Yes | As shown in Algorithm 1, SAFARI offers two options in each round: a client update option and a global server update option (see the sketch after this table).
Open Source Code | No | The paper mentions relegating experimental details and results to supplementary material but provides no explicit statement or link for the availability of the source code.
Open Datasets | Yes | 1) logistic regression (LR) on the MNIST dataset (Le Cun et al., 1998); 2) a convolutional neural network (CNN) on the CIFAR-10 dataset (Krizhevsky et al., 2009).
Dataset Splits | No | The paper describes how data is distributed among clients and mentions training, but it does not provide explicit details about a separate validation dataset or its split percentages/counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch versions) for reproducibility.
Experiment Setup | Yes | For both MNIST and CIFAR-10, the global learning rate is 1.0, the local learning rate is 0.1, and the server learning rate for SAFARI is 0.1. The number of local epochs is 1. For MNIST, the batch size is 64 and the total number of communication rounds is 150. For CIFAR-10, the batch size is 500 and the total number of communication rounds is 5000. (These values are collected in the configuration sketch below.)
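
The Algorithm 1 referenced in the Pseudocode row is not reproduced in this report. Below is a minimal Python sketch of the two-option round structure described there, under stated assumptions: a linear model with squared loss stands in for the paper's LR/CNN models, and the option is chosen simply by whether enough clients are available in the round. Names such as `safari_round`, `local_sgd`, `server_data`, and `min_clients` are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch of one SAFARI-style round with its two options.
# Assumptions (not from the paper): a linear model with squared loss,
# and the option chosen by how many clients are available this round.

def sq_loss_grad(w, X, y):
    """Gradient of the mean squared error of a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def local_sgd(w, X, y, lr, epochs):
    """A client's local update: full-batch gradient steps on its own data."""
    for _ in range(epochs):
        w = w - lr * sq_loss_grad(w, X, y)
    return w

def safari_round(w, available_clients, client_data, server_data,
                 global_lr=1.0, local_lr=0.1, server_lr=0.1,
                 local_epochs=1, min_clients=2):
    if len(available_clients) >= min_clients:
        # Option I (client update): available clients run local updates and
        # the server averages their model deltas, scaled by the global rate.
        deltas = [local_sgd(w.copy(), *client_data[i], local_lr, local_epochs) - w
                  for i in available_clients]
        return w + global_lr * np.mean(deltas, axis=0)
    # Option II (global server update): the server takes a gradient step
    # on its own auxiliary data when client participation is too sparse.
    X_s, y_s = server_data
    return w - server_lr * sq_loss_grad(w, X_s, y_s)
```

With global_lr set to 1.0 and enough clients available, Option I reduces to FedAvg-style averaging of locally updated models; Option II is the server-assisted step that distinguishes the scheme under incomplete client participation.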
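The hyperparameters listed in the Experiment Setup row can also be collected into a single configuration sketch for a re-run. The dictionary layout and key names below are illustrative assumptions; only the numeric values come from the row above.

```python
# Hyperparameters quoted in the Experiment Setup row, grouped per task.
# Key names are illustrative; only the values are taken from the report.
EXPERIMENT_CONFIG = {
    "mnist_lr": {             # logistic regression on MNIST
        "global_lr": 1.0,
        "local_lr": 0.1,
        "server_lr": 0.1,     # used by SAFARI only
        "local_epochs": 1,
        "batch_size": 64,
        "rounds": 150,
    },
    "cifar10_cnn": {          # CNN on CIFAR-10
        "global_lr": 1.0,
        "local_lr": 0.1,
        "server_lr": 0.1,     # used by SAFARI only
        "local_epochs": 1,
        "batch_size": 500,
        "rounds": 5000,
    },
}
```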