Federated Composite Optimization

Authors: Honglin Yuan, Manzil Zaheer, Sashank Reddi

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our theoretical analysis and empirical experiments demonstrate that FEDDUALAVG outperforms the other baselines."
Researcher Affiliation | Collaboration | "1 Stanford University; 2 Based on work performed at Google Research; 3 Google Research. Correspondence to: Honglin Yuan <yuanhl@stanford.edu>."
Pseudocode | Yes | "Algorithm 1 Federated Averaging (FEDAVG) ... Algorithm 2 Federated Mirror Descent (FEDMID) ... Algorithm 3 Federated Dual Averaging (FEDDUALAVG)" (local updates sketched below the table)
Open Source Code | Yes | "The source code is available at https://github.com/hongliny/FCO-ICML21."
Open Datasets | Yes | "For the purpose of illustration, in Fig. 1, we present results on a federated sparse (ℓ1-regularized) logistic regression task for an fMRI dataset based on (Haxby, 2001)."
Dataset Splits | Yes | "We select five (out of six) subjects as the training set and the last subject as the held-out validation set."
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions using "the nilearn package (Abraham et al., 2014)" but does not provide version numbers for this or any other software dependency, which are required for reproducibility.
Experiment Setup | Yes | "The best learning rates configuration is η_c = 0.01, η_s = 1 for FEDDUALAVG, and η_c = 0.001, η_s = 0.3 for other algorithms (including FEDMID). ... We set the ℓ1-regularization strength to be 10^-3. For each setup, we run the federated algorithms for 300 communication rounds."
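
The three algorithms named in the Pseudocode row differ mainly in what state the clients update locally and what the server averages. Below is a minimal Python sketch of one client's local phase for FEDMID and FEDDUALAVG, assuming the Euclidean distance-generating function and an ℓ1 regularizer ψ(w) = λ‖w‖1 (so the composite step reduces to soft-thresholding) and folding the server step size out (η_s = 1); the function and parameter names are ours, not the paper's or the repository's.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: the composite step for an l1 regularizer
    under the Euclidean distance-generating function."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fedmid_client(w, grad_fn, eta_c, lam, local_steps):
    """FEDMID local phase: proximal (mirror-descent) gradient steps taken
    in the primal space; the server then averages the returned primal
    states across clients."""
    for _ in range(local_steps):
        w = soft_threshold(w - eta_c * grad_fn(w), eta_c * lam)
    return w

def feddualavg_client(z, eta_sum, grad_fn, eta_c, lam, local_steps):
    """FEDDUALAVG local phase: stochastic gradients accumulate in a dual
    state z, and the primal point is recovered through the same prox map;
    the server averages the returned dual states. eta_sum is the running
    sum of step sizes weighting the regularizer (server step size set to
    1 here, a simplification of the paper's Algorithm 3)."""
    for _ in range(local_steps):
        w = soft_threshold(z, eta_sum * lam)
        z = z - eta_c * grad_fn(w)
        eta_sum = eta_sum + eta_c
    return z, eta_sum
```

The contrast the paper emphasizes shows up directly in the return values: FEDMID hands back primal states, and averaging those on the server tends to destroy sparsity (the "curse of primal averaging"), whereas FEDDUALAVG hands back dual states, whose average still passes through the sparsity-inducing prox.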
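The Open Datasets row quotes a federated sparse (ℓ1-regularized) logistic regression task; written out, the composite objective is F(w) = f(w) + λ‖w‖1. Here is a minimal sketch of that objective, assuming a binary logistic loss with labels in {-1, +1} (the actual fMRI task is prepared via nilearn and need not be binary):

```python
import numpy as np

def logistic_loss(w, X, y):
    # f(w): average binary logistic loss, labels y in {-1, +1};
    # logaddexp(0, -m) computes log(1 + exp(-m)) stably.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def composite_objective(w, X, y, lam=1e-3):
    # F(w) = f(w) + lam * ||w||_1; lam = 1e-3 matches the l1 strength
    # quoted in the Experiment Setup row.
    return logistic_loss(w, X, y) + lam * np.sum(np.abs(w))
```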
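Finally, the hyperparameters quoted in the Experiment Setup row, gathered in one place; the variable names (eta_c for the client learning rate, eta_s for the server learning rate) are ours, hypothetical, and not taken from the repository:

```python
# Best learning-rate configurations as quoted from the paper.
BEST_LEARNING_RATES = {
    "FEDDUALAVG": {"eta_c": 0.01, "eta_s": 1.0},
    "OTHER_ALGORITHMS": {"eta_c": 0.001, "eta_s": 0.3},  # including FEDMID
}
L1_STRENGTH = 1e-3          # l1-regularization strength
COMMUNICATION_ROUNDS = 300  # rounds run per setup
```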