On the Role of Server Momentum in Federated Learning

Authors: Jianhui Sun, Xidong Wu, Heng Huang, Aidong Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments validate the effectiveness of our proposed framework." and, from Section 6 (Experimental Results): "In this section, we present empirical evidence to verify our theoretical findings. We train ResNet (He et al. 2016) and VGG (Simonyan and Zisserman 2015) on CIFAR10 (Krizhevsky 2009)."
Researcher Affiliation | Academia | 1 Computer Science, University of Virginia, VA, USA; 2 Electrical and Computer Engineering, University of Pittsburgh, PA, USA; 3 Computer Science, University of Maryland College Park, MD, USA
Pseudocode | Yes | "Algorithm 1: FedOPT (Reddi et al. 2020): A Generic Formulation of Federated Optimization" and "Algorithm 3: Multistage FedGM"
Open Source Code | No | No explicit statement or link provides access to open-source code for the methodology described in this paper.
Open Datasets | Yes | "We train ResNet (He et al. 2016) and VGG (Simonyan and Zisserman 2015) on CIFAR10 (Krizhevsky 2009)."
Dataset Splits | No | The paper does not explicitly mention train/validation/test dataset splits or a cross-validation setup.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) are provided for the experimental setup.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA) are listed in the paper.
Experiment Setup | Yes | "Unless specified otherwise, we have 100 clients in all experiments, and the partial participation ratio is 0.05, i.e., 5 out of 100 clients are picked in each round, non-i.i.d. is α = 0.5, and local epoch is 3." and "We perform grid search over η ∈ {0.5, 1.0, 1.5, ..., 5.0}, β ∈ {0.7, 0.9, 0.95}, and ν ∈ {0.7, 0.9, 0.95}."
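As a minimal sketch of what the quoted setup implies, the snippet below combines a label-wise Dirichlet(α = 0.5) non-i.i.d. partition, sampling 5 of 100 clients per round, and a heavy-ball server-momentum update in the spirit of FedGM. The function names (`dirichlet_partition`, `server_momentum_round`) are hypothetical, and the momentum recursion is the standard heavy-ball form rather than the authors' exact implementation, which is not public per the table above.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.5, seed=0):
    """Split sample indices across clients with a label-wise Dirichlet(alpha)
    distribution; smaller alpha means more non-i.i.d. (alpha = 0.5 matches
    the setup quoted above)."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in range(labels.max() + 1):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class-c samples assigned to each client.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

def server_momentum_round(x, client_deltas, m, eta=1.0, beta=0.9):
    """One FedOPT-style server step with heavy-ball server momentum:
    m <- beta * m + mean(client pseudo-gradients);  x <- x - eta * m."""
    d = np.mean(client_deltas, axis=0)
    m = beta * m + d
    return x - eta * m, m

# Partial participation: pick 5 of 100 clients per round (ratio 0.05).
rng = np.random.default_rng(0)
participants = rng.choice(100, size=5, replace=False)
```

A full round would run local training on each sampled client, collect the resulting pseudo-gradients (initial minus final local weights), and feed them to `server_momentum_round`; η and β here correspond to the server learning rate and momentum coefficient swept in the grid search above.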