Fast Federated Learning in the Presence of Arbitrary Device Unavailability

Authors: Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also provide an explicit characterization of the improvement over baseline algorithms through a case study, and validate the results by numerical experiments on real-world datasets. In this section, we conduct numerical experiments to verify our theoretical results and investigate how the heterogeneity of the device availability influences the federated optimization algorithms.
Researcher Affiliation | Academia | Xinran Gu (IIIS, Tsinghua University) gxr21@mails.tsinghua.edu.cn; Kaixuan Huang (ECE, Princeton University) kaixuanh@princeton.edu; Jingzhao Zhang (EECS, Massachusetts Institute of Technology) jzhzhang@mit.edu; Longbo Huang (IIIS, Tsinghua University) longbohuang@tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1: Memory-augmented Impatient Federated Averaging (MIFA). A hedged sketch of the MIFA-style update rule is given below the table.
Open Source Code | Yes | Our code is available at https://github.com/hmgxr128/MIFA_code/
Open Datasets | Yes | Following [26, 25], we construct non-i.i.d. datasets from two commonly used computer vision datasets, MNIST [23] and CIFAR-10 [22]. (A sketch of one common non-i.i.d. partition scheme appears below the table.)
Dataset Splits | No | The paper uses commonly used datasets (MNIST, CIFAR-10) but does not explicitly detail training, validation, or test dataset splits.
Hardware Specification | Yes | We run all the experiments with 4 GPUs of type GeForce RTX 2080 Ti.
Software Dependencies | No | The paper mentions adapting code from a previous work [26] but does not provide specific version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | Yes | In all the experiments, we set the initial learning rate to be η0 = 0.1 and decay the learning rate as ηt = η0/t. We set the weight decay to be 0.001. The local batch size is 100 and each local update consists of 2 epochs. (These values are collected in the configuration sketch below.)
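
To make the "Pseudocode" row more concrete, here is a minimal sketch of one round of a MIFA-style update, assuming the server keeps a memory slot per device and never waits for unavailable devices. The function and variable names, and the `local_update` helper, are hypothetical illustrations, not the authors' code; see Algorithm 1 in the paper and the linked repository for the actual method, including how the memory is initialized in the first rounds.

```python
import numpy as np

def mifa_round(x, memory, available, local_update, lr):
    """One round of a MIFA-style update (illustrative sketch).

    x          -- current global model parameters (np.ndarray)
    memory     -- dict: device id -> last stored update from that device
    available  -- iterable of device ids that respond in this round
    local_update(i, x) -- hypothetical helper running local SGD on
                          device i's data and returning an update direction
    lr         -- server learning rate eta_t for this round
    """
    # Available devices refresh their slot in the memory; unavailable
    # devices keep their previously stored (possibly stale) update.
    for i in available:
        memory[i] = local_update(i, x)

    # Average the memorized updates over ALL devices, fresh or stale,
    # then take one global step; the server does not wait for stragglers.
    avg_update = sum(memory.values()) / len(memory)
    return x - lr * avg_update, memory
```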
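The exact non-i.i.d. construction follows [26, 25] and is not reproduced in the excerpt above. As an illustration only, the sketch below shows a common label-sorted shard partition often used with MNIST and CIFAR-10; the paper's recipe may differ in shard counts and assignment.

```python
import numpy as np

def label_sorted_shards(labels, num_devices, shards_per_device=2, seed=0):
    """Partition sample indices into non-i.i.d. per-device shards.

    Samples are sorted by class label, cut into equal shards, and each
    device receives `shards_per_device` random shards, so every device
    sees only a few classes.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)  # indices grouped by class label
    shards = np.array_split(order, num_devices * shards_per_device)
    shard_ids = rng.permutation(len(shards))
    groups = np.array_split(shard_ids, num_devices)
    return [np.concatenate([shards[s] for s in g]) for g in groups]

# Example: 100 devices over MNIST labels, each holding roughly 2 classes.
# parts = label_sorted_shards(train_labels, num_devices=100)
```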
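The hyperparameters quoted in the "Experiment Setup" row can be collected into a small configuration sketch. The constant names are arbitrary, and the assumption that rounds are indexed from t = 1 in the decay schedule is ours.

```python
# Hyperparameters reported in the experiment setup.
ETA0 = 0.1             # initial learning rate eta_0
WEIGHT_DECAY = 1e-3    # weight decay
LOCAL_BATCH_SIZE = 100
LOCAL_EPOCHS = 2       # each local update runs 2 epochs over local data

def lr_schedule(t):
    """Decayed learning rate eta_t = eta_0 / t (rounds assumed to start at t = 1)."""
    return ETA0 / t
```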