Convergence Analysis of Sequential Federated Learning on Heterogeneous Data

Authors: Yipeng Li, Xinchen Lyu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the counterintuitive analysis result that SFL outperforms PFL on extremely heterogeneous data in cross-device settings.
Researcher Affiliation | Academia | Yipeng Li and Xinchen Lyu, National Engineering Research Center for Mobile Network Technologies, Beijing University of Posts and Telecommunications, Beijing, 100876, China; {liyipeng, lvxinchen}@bupt.edu.cn
Pseudocode | Yes | Algorithm 1: Sequential FL; Algorithm 2: Parallel FL (a minimal illustrative sketch of both update schemes appears after this table).
Open Source Code | Yes | Our code is partly from Gao et al. (2021); Zeng et al. (2021); Jhunjhunwala et al. (2023) (more references are included in the code), and it is available at https://github.com/liyipeng00/convergence.
Open Datasets | Yes | We partition the training sets of CIFAR-10 (Krizhevsky et al., 2009) and CINIC-10 (Darlow et al., 2018). (An assumed example of such a heterogeneous partition is sketched after this table.)
Dataset Splits | No | The paper describes partitioning the training sets and leaving the test sets untouched, but it does not explicitly mention or detail a validation set or its split.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions using VGG and ResNet models and SGD as the local solver, but it does not provide specific version numbers for software frameworks, libraries, or dependencies.
Experiment Setup | Yes | We fix the number of participating clients to 10 and the mini-batch size to 20. The local solver is SGD with a constant learning rate, momentum 0, and weight decay 1e-4. We apply gradient clipping to both algorithms (Appendix G.2) and tune the learning rate by grid search (Appendix G.3). (A minimal grid-search sketch follows the table.)
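
For readers who want a concrete picture of the two pseudocode listings (Algorithm 1: Sequential FL; Algorithm 2: Parallel FL), here is a minimal PyTorch-style sketch of one training round under each scheme. It is an assumption-laden illustration rather than the authors' implementation: the `local_sgd` helper, the clipping max-norm, and the client sampling/ordering are placeholders; only the hand-off structure (sequential relay in SFL, parallel training plus averaging in PFL) reflects the algorithms named in the table.

```python
import copy
import torch
import torch.nn.functional as F

def local_sgd(model, loader, lr, steps):
    """A few local SGD steps on one client's data (placeholder helper)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.0, weight_decay=1e-4)
    data_iter = iter(loader)
    for _ in range(steps):
        x, y = next(data_iter)              # assumes the loader yields enough batches
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        # Gradient clipping as mentioned in Appendix G.2; the max-norm value is an assumption.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
        opt.step()
    return model

def sfl_round(global_model, client_loaders, lr, steps):
    """Sequential FL round: clients train one after another, each starting
    from the model handed over by the previous client."""
    model = copy.deepcopy(global_model)
    for loader in client_loaders:           # client order would be shuffled each round
        model = local_sgd(model, loader, lr, steps)
    return model                             # last client's model becomes the new global model

def pfl_round(global_model, client_loaders, lr, steps):
    """Parallel FL (FedAvg-style) round: all clients start from the same
    global model and the server averages their updated parameters."""
    local_models = [local_sgd(copy.deepcopy(global_model), loader, lr, steps)
                    for loader in client_loaders]
    new_model = copy.deepcopy(global_model)
    with torch.no_grad():
        for p_new, *p_locals in zip(new_model.parameters(),
                                    *(m.parameters() for m in local_models)):
            p_new.copy_(torch.stack(p_locals).mean(dim=0))
    return new_model
```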
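The open-datasets row says only that the CIFAR-10 and CINIC-10 training sets are partitioned across clients; the partition scheme itself is not described here. As an assumed illustration of how such a heterogeneous (non-IID) split can be produced, the snippet below uses a Dirichlet label partition with concentration `alpha`; it is not claimed to be the paper's exact strategy.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with a Dirichlet label distribution.

    Smaller alpha -> more heterogeneous (each client sees few classes);
    larger alpha -> closer to an IID split. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Draw per-client proportions for this class and cut the index list accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions) * len(cls_idx)).astype(int)[:-1]
        for client_id, chunk in enumerate(np.split(cls_idx, cut_points)):
            client_indices[client_id].extend(chunk.tolist())
    return [np.array(idx) for idx in client_indices]

# Example: 10 participating clients, as in the experiment setup row above.
# client_splits = dirichlet_partition(train_labels, num_clients=10, alpha=0.1)
```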
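Finally, the experiment-setup row states that only the learning rate is tuned (by grid search), while the other hyperparameters (10 clients, batch size 20, momentum 0, weight decay 1e-4, gradient clipping) stay fixed. The sketch below shows what that outer tuning loop can look like; the candidate grid and the `run_trial` callback are hypothetical placeholders, not values from the paper.

```python
def grid_search_lr(run_trial, lr_grid=(0.3, 0.1, 0.03, 0.01)):
    """Pick the learning rate with the best resulting accuracy.

    `run_trial(lr)` is assumed to train with the fixed setup described above
    and return a scalar accuracy; the grid values are placeholders.
    """
    results = {lr: run_trial(lr) for lr in lr_grid}
    best_lr = max(results, key=results.get)
    return best_lr, results
```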