Accelerated Federated Learning with Decoupled Adaptive Optimization

Authors: Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation on real federated tasks and datasets demonstrates the superior performance of our momentum decoupling adaptive optimization model against several state-of-the-art regular federated learning and federated optimization approaches.
Researcher Affiliation | Collaboration | 1 Auburn University, USA; 2 Sony AI, Japan; 3 Baidu Research, China; 4 University of Oregon, USA.
Pseudocode | Yes | Algorithm 1: FedDA+SGDM; Algorithm 2: FedDA+ADAM & FedDA+AdaGrad
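FedDA+SGDM, FedDA+ADAM, and FedDA+AdaGrad are the paper's momentum-decoupled server-side variants. The sketch below is not the authors' Algorithm 1 or 2; it is a minimal NumPy illustration of the generic pattern such algorithms build on, in which the averaged client update is treated as a pseudo-gradient and a momentum/adaptive step is applied at the server. The toy quadratic objective and all function names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical toy setup: each client holds a quadratic loss 0.5 * ||x - c_i||^2,
# so the local gradient at x is simply (x - c_i). This is NOT the paper's setting.
rng = np.random.default_rng(0)
client_centers = [rng.normal(size=5) for _ in range(10)]

def local_update(x, center, lr=0.1, local_steps=5):
    """Run a few local SGD steps and return the client delta (x_local - x)."""
    x_local = x.copy()
    for _ in range(local_steps):
        grad = x_local - center          # gradient of the toy quadratic loss
        x_local -= lr * grad
    return x_local - x

def server_adam_step(x, pseudo_grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style server step applied to the aggregated pseudo-gradient."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * pseudo_grad
    state["v"] = b2 * state["v"] + (1 - b2) * pseudo_grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return x - lr * m_hat / (np.sqrt(v_hat) + eps)

x = np.zeros(5)
state = {"m": np.zeros(5), "v": np.zeros(5), "t": 0}
for _ in range(50):
    deltas = [local_update(x, c) for c in client_centers]
    pseudo_grad = -np.mean(deltas, axis=0)   # negative average delta acts as a gradient
    x = server_adam_step(x, pseudo_grad, state)
print("distance to optimum:", np.linalg.norm(x - np.mean(client_centers, axis=0)))
```

Swapping `server_adam_step` for a plain momentum or AdaGrad update gives the SGDM- and AdaGrad-style counterparts of this generic pattern.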
Open Source Code | No | We promise to release our open-source codes on GitHub and maintain a project website with detailed documentation for long-term access by other researchers and end-users after the paper is accepted.
Open Datasets | Yes | Datasets. We focus on three popular computer vision and natural language processing tasks over three representative benchmark datasets respectively: (1) image classification over CIFAR-100 (Krizhevsky, 2009). ... (2) image classification over EMNIST (Hsieh et al., 2020). ... and (3) text classification over Stack Overflow (TensorFlow, 2019).
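All three benchmarks ship as federated (per-client) datasets in TFF's simulation package, which is consistent with the TFF dependency noted below. A minimal loading sketch, assuming tensorflow_federated is installed and that these standard loaders correspond to the splits the paper used:

```python
import tensorflow_federated as tff

# Federated CIFAR-100: clients correspond to a heterogeneous label partition shipped with TFF.
cifar_train, cifar_test = tff.simulation.datasets.cifar100.load_data()

# Federated EMNIST: one client per writer; only_digits=False keeps all 62 character classes.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(only_digits=False)

# Federated Stack Overflow: one client per user; note the extra held-out split.
so_train, so_heldout, so_test = tff.simulation.datasets.stackoverflow.load_data()

# Each object is a ClientData; individual client datasets are ordinary tf.data pipelines.
first_client = emnist_train.client_ids[0]
client_ds = emnist_train.create_tf_dataset_for_client(first_client)
print(len(emnist_train.client_ids), "EMNIST clients; element spec:", client_ds.element_spec)
```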
Dataset Splits | No | The paper describes how data is partitioned among clients and how hyperparameters are tuned based on training loss, but it does not specify explicit training/validation/test splits (e.g., 80/10/10) for the datasets used. It also notes that validation data for hyperparameter tuning is 'often inaccessible in federated settings'.
Hardware Specification | Yes | Our experiments were conducted on a compute server running on Red Hat Enterprise Linux 7.2 with 2 CPUs of Intel Xeon E5-2650 v4 (at 2.66 GHz) and 8 GPUs of NVIDIA GeForce GTX 2080 Ti (with 11GB of GDDR6 on a 352-bit memory bus and memory bandwidth in the neighborhood of 620GB/s), 256GB of RAM, and 1TB of HDD.
Software Dependencies | No | All the codes were implemented based on the TensorFlow Federated (TFF) package (Ingerman & Ostrowski, 2019).
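The paper names TFF as its framework but does not pin package versions, which is why this row is marked "No". A minimal environment-check sketch; any specific versions you install are your own assumption, not values stated by the authors:

```python
# Assumes `pip install tensorflow-federated` has been run, which pulls in a
# compatible TensorFlow release; the exact versions used in the paper are unknown.
import tensorflow as tf
import tensorflow_federated as tff

print("TensorFlow:", tf.__version__)
print("TensorFlow Federated:", tff.__version__)
```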
Experiment Setup | Yes | Unless otherwise explicitly stated, we used the following default parameter settings in the experiments, as shown in Table 12.
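Table 12 itself is not reproduced in this summary, so the values below are placeholders rather than the paper's defaults; the sketch only shows how a default-settings table of this kind typically maps onto a run configuration for federated experiments, with per-experiment overrides applied on top ("unless otherwise explicitly stated").

```python
# Hypothetical default-settings dictionary; every value is a placeholder,
# NOT the contents of the paper's Table 12.
DEFAULTS = {
    "rounds": 1000,            # number of communication rounds
    "clients_per_round": 10,   # clients sampled per round
    "local_epochs": 1,         # local passes over each client's data
    "batch_size": 20,          # client mini-batch size
    "client_lr": 0.01,         # local (client) learning rate
    "server_lr": 1.0,          # server learning rate for SGDM/Adam/AdaGrad
    "server_momentum": 0.9,    # momentum for the FedDA+SGDM variant
    "adam_epsilon": 1e-3,      # epsilon for the FedDA+ADAM variant
}

def make_run_config(overrides=None):
    """Merge experiment-specific overrides onto the defaults."""
    config = dict(DEFAULTS)
    config.update(overrides or {})
    return config

print(make_run_config({"clients_per_round": 50}))
```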