Multi-Level Branched Regularization for Federated Learning
Authors: Jinkyu Kim, Geeho Kim, Bohyung Han
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform comprehensive empirical studies and demonstrate remarkable performance gains in terms of accuracy and efficiency compared to existing methods. |
| Researcher Affiliation | Academia | Computer Vision Laboratory, Department of Electrical and Computer Engineering & ASRI, Seoul National University, Korea; Interdisciplinary Program in Artificial Intelligence, Seoul National University, Korea. |
| Pseudocode | Yes | Algorithm 1 FedMLB |
| Open Source Code | Yes | The source code is available on our project page: http://cvlab.snu.ac.kr/research/FedMLB |
| Open Datasets | Yes | We conduct a set of experiments on the CIFAR-100 and Tiny-ImageNet (Le & Yang, 2015) datasets. |
| Dataset Splits | No | The paper explicitly states the use of a 'whole test set' for evaluation and describes data distribution for training (non-iid data partitioned across clients), but it does not explicitly mention a separate validation dataset split or its specific percentages/counts. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | We use PyTorch (Paszke et al., 2019) to implement the proposed method and other baselines. |
| Experiment Setup | Yes | The number of local training epochs is set to 5, and the batch size is determined to make the total number of iterations for local updates 50 for all experiments unless specified otherwise. The global learning rate is 1 for all methods except for FedAdam with 0.01. We list the details of the hyperparameters specific to FedMLB and the baseline algorithms in Appendix A. |
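
The experiment-setup row above pins down a few concrete numbers: 5 local epochs per round, a batch size chosen so that each round performs 50 local iterations in total, and a global learning rate of 1.0 (0.01 for FedAdam). The following is a minimal PyTorch-style sketch of how a client's local update could be configured under those settings; the helper names (`make_local_loader`, `local_update`, `client_dataset`) and the local learning rate are hypothetical illustrations, not details taken from the paper.

```python
# Minimal sketch of the local-update configuration described in the table.
# Only the numbers (5 local epochs, 50 total local iterations, global LR 1.0,
# 0.01 for FedAdam) come from the paper; everything else is an assumption.
import math
import torch
from torch.utils.data import DataLoader

LOCAL_EPOCHS = 5          # local training epochs per round
TOTAL_LOCAL_ITERS = 50    # target number of local optimization steps per round
GLOBAL_LR = 1.0           # server learning rate (0.01 when FedAdam is used)


def make_local_loader(client_dataset):
    # Choose the batch size so that LOCAL_EPOCHS passes over the client's data
    # amount to roughly TOTAL_LOCAL_ITERS optimization steps.
    batch_size = max(1, math.ceil(len(client_dataset) * LOCAL_EPOCHS / TOTAL_LOCAL_ITERS))
    return DataLoader(client_dataset, batch_size=batch_size, shuffle=True)


def local_update(model, client_dataset, local_lr=0.1):
    # local_lr is a placeholder value; the paper's per-method hyperparameters
    # are listed in its Appendix A.
    loader = make_local_loader(client_dataset)
    optimizer = torch.optim.SGD(model.parameters(), lr=local_lr)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(LOCAL_EPOCHS):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model.state_dict()
```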