Few-Round Learning for Federated Learning

Authors: Younghyun Park, Dong-Jun Han, Do-Yeon Kim, Jun Seo, Jaekyun Moon

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that our method generalizes well for arbitrary groups of clients and provides large performance improvements given the same overall communication/computation resources, compared to other baselines relying on known pretraining methods.
Researcher Affiliation | Academia | Younghyun Park (dnffkf369@kaist.ac.kr), Dong-Jun Han (djhan93@kaist.ac.kr), Do-Yeon Kim (dy.kim@kaist.ac.kr), Jun Seo (tjwns0630@kaist.ac.kr), Jaekyun Moon (jmoon@kaist.edu); School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)
Pseudocode | Yes | Algorithm 1: Proposed Meta-Training Algorithm for Few-Round Learning (an illustrative code sketch follows the table).
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability.
Open Datasets | Yes | We validate our algorithm on CIFAR-100 [10], miniImageNet [19], FEMNIST [2].
Dataset Splits | Yes | Following the data splits in [14], for CIFAR-100 and miniImageNet, 100 classes are divided into 64 train, 16 validation, and 20 test classes (a split example follows the table).
Hardware Specification | Yes | All methods are implemented using PyTorch and trained with a single GeForce RTX 2080 Ti.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or other software dependencies with version information.
Experiment Setup | Yes | We adopt the SGD optimizer with a learning rate of β = 0.001 for the meta-learner and a learning rate of α = 0.0001 for the learner. We set the mini-batch size to 60 and the number of local epochs at each client to E = 1 (a configuration sketch follows the table).
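
The Pseudocode row refers to Algorithm 1, the paper's meta-training procedure for few-round learning. The sketch below is only a minimal, first-order illustration of that idea: repeatedly sample a group of clients, simulate a small number of federated rounds (local SGD followed by FedAvg aggregation) starting from the current initialization, and move the initialization toward the adapted model. The `sample_episode` callable, the Reptile-style outer update, and all function names are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal meta-training sketch in the spirit of Algorithm 1 (not the paper's exact rule).
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, loader, lr, epochs=1):
    """One client's local training: E epochs of SGD starting from the global model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(client_states):
    """FedAvg aggregation: element-wise average of the clients' parameters."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

def meta_train(init_model, sample_episode, episodes, rounds, alpha=1e-4, beta=1e-3):
    """Meta-learn an initialization that adapts well within only `rounds` federated rounds."""
    for _ in range(episodes):
        clients = sample_episode()                 # hypothetical: returns a list of client DataLoaders
        global_model = copy.deepcopy(init_model)
        for _ in range(rounds):                    # simulate the few federated rounds
            states = [local_update(global_model, c, lr=alpha) for c in clients]
            global_model.load_state_dict(fed_avg(states))
        with torch.no_grad():                      # first-order (Reptile-style) meta-update stand-in
            for p0, p1 in zip(init_model.parameters(), global_model.parameters()):
                p0 += beta * (p1 - p0)             # move the initialization toward the adapted weights
    return init_model
```

The intent matches the paper's premise: a fresh group of clients starting from the meta-learned initialization should reach good accuracy after only a few federated rounds.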
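
For the Dataset Splits row, the 64/16/20 class-level partition of CIFAR-100 and miniImageNet can be expressed as below. This is only a sketch: the paper follows the fixed splits of [14], whereas the snippet uses a seeded random permutation as a stand-in.

```python
# Illustrative 64/16/20 class split over 100 classes (the paper uses the fixed splits of [14]).
import numpy as np

rng = np.random.default_rng(0)          # assumed seed; only for a reproducible illustration
class_ids = rng.permutation(100)
train_classes = class_ids[:64]          # 64 meta-train classes
val_classes = class_ids[64:80]          # 16 validation classes
test_classes = class_ids[80:]           # 20 test classes
assert len(train_classes) == 64 and len(val_classes) == 16 and len(test_classes) == 20
```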
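
For the Experiment Setup row, a hedged PyTorch sketch of the reported hyperparameters: SGD with learning rate β = 0.001 for the meta-learner and α = 0.0001 for the learner, mini-batch size 60, and E = 1 local epoch per client. Model and dataset objects are placeholders, not the paper's implementation.

```python
# Hyperparameter configuration sketch based on the values quoted above.
import torch
from torch.utils.data import DataLoader

ALPHA = 1e-4          # learner (client-side) learning rate, alpha
BETA = 1e-3           # meta-learner learning rate, beta
BATCH_SIZE = 60       # mini-batch size
LOCAL_EPOCHS = 1      # E, local epochs per client per round

def build_optimizers(meta_model, learner_model):
    """SGD at both levels, with the learning rates reported in the paper."""
    meta_opt = torch.optim.SGD(meta_model.parameters(), lr=BETA)
    learner_opt = torch.optim.SGD(learner_model.parameters(), lr=ALPHA)
    return meta_opt, learner_opt

def client_loader(dataset):
    """Per-client DataLoader with the reported mini-batch size."""
    return DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
```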