SCAFFOLD: Stochastic Controlled Averaging for Federated Learning

Authors: Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, Ananda Theertha Suresh

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we confirm our theoretical results on simulated and real datasets (extended MNIST by Cohen et al. (2017))."
Researcher Affiliation | Collaboration | "¹EPFL, Lausanne ²Based on work performed at Google Research, New York. ³Google Research, New York ⁴Courant Institute, New York."
Pseudocode | Yes | "Algorithm 1 SCAFFOLD: Stochastic Controlled Averaging for federated learning" (a sketch of one round appears after this table)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | "Our real world experiments run logistic regression (convex) and 2 layer fully connected network (non-convex) on the EMNIST (Cohen et al., 2017)." (a sketch of the non-convex model follows the table)
Dataset Splits | No | The paper describes how data is distributed among clients to create heterogeneity, but it does not give explicit training/validation/test splits (e.g., percentages or methodology).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "We always use global step-size ηg = 1 and tune the local step-size ηl individually for each algorithm. ... 1 epoch for local update methods corresponds to 5 local steps (0.2 batch size), and 20% of clients are sampled each round. We fix µ = 1 for FEDPROX..." (these settings are collected into a config sketch below)
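
The Pseudocode row above points to Algorithm 1 of the paper. As a concrete reading aid, here is a minimal NumPy sketch of one SCAFFOLD communication round using the paper's option (ii) control-variate update; the names scaffold_round, grad_fn, and c_local are our own, and the stochastic gradient oracle is assumed to be supplied by the caller.

```python
import numpy as np

def scaffold_round(x, c, c_local, grad_fn, sampled, num_clients,
                   local_steps, eta_l, eta_g=1.0):
    """One SCAFFOLD round (Algorithm 1, option (ii) control update).

    x        -- global model parameters (np.ndarray)
    c        -- server control variate, same shape as x
    c_local  -- dict: client id -> client control variate c_i
    grad_fn  -- grad_fn(i, y) returns a stochastic gradient g_i(y)
    sampled  -- ids of the clients participating this round
    """
    delta_y, delta_c = [], []
    for i in sampled:
        y, c_i = x.copy(), c_local[i]
        for _ in range(local_steps):
            # drift-corrected local step: y <- y - eta_l * (g_i(y) - c_i + c)
            y = y - eta_l * (grad_fn(i, y) - c_i + c)
        # option (ii): c_i+ <- c_i - c + (x - y) / (K * eta_l)
        c_i_new = c_i - c + (x - y) / (local_steps * eta_l)
        delta_y.append(y - x)
        delta_c.append(c_i_new - c_i)
        c_local[i] = c_i_new
    # server updates: x <- x + eta_g * mean(dy), c <- c + (|S|/N) * mean(dc)
    x = x + eta_g * np.mean(delta_y, axis=0)
    c = c + (len(sampled) / num_clients) * np.mean(delta_c, axis=0)
    return x, c
```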
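
The Open Datasets row quotes the two EMNIST models: convex logistic regression and a non-convex two-layer network. Below is a hedged PyTorch sketch of the non-convex model; the excerpt does not state the hidden width, activation, or label set, so those are assumptions (62 classes here matches the EMNIST "byclass" split).

```python
import torch.nn as nn

class TwoLayerFC(nn.Module):
    """Sketch of the paper's "2 layer fully connected network" on EMNIST.

    The 28x28 input matches EMNIST images; the hidden width (256), ReLU
    activation, and 62 output classes are assumptions, not quoted values.
    """
    def __init__(self, num_classes=62, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)
```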
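
The Experiment Setup row quotes several hyperparameters scattered across the paper. For quick reference, the sketch below restates them as a single config dict; the key names are hypothetical, and the local step-size is left unset because the paper tunes it per algorithm.

```python
# Key names are ours; values restate the Experiment Setup quote above.
experiment_config = {
    "global_step_size": 1.0,  # eta_g = 1, fixed for all algorithms
    "local_step_size": None,  # eta_l, tuned individually per algorithm
    "steps_per_epoch": 5,     # 1 local epoch corresponds to 5 local steps
    "batch_fraction": 0.2,    # "0.2 batch size", read as a fraction of a
                              # client's local data (our interpretation)
    "client_fraction": 0.2,   # 20% of clients are sampled each round
    "fedprox_mu": 1.0,        # mu = 1 for the FEDPROX baseline
}
```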