Hierarchical Federated Learning with Multi-Timescale Gradient Correction
Authors: Wenzhi Fang, Dong-Jun Han, Evan Chen, Shiqiang Wang, Christopher Brinton
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on various datasets and models, we validate the effectiveness of MTGC in diverse HFL settings. |
| Researcher Affiliation | Collaboration | Wenzhi Fang, Purdue University, fang375@purdue.edu; Dong-Jun Han, Yonsei University, djh@yonsei.ac.kr; Evan Chen, Purdue University, chen4388@purdue.edu; Shiqiang Wang, IBM Research, wangshiq@us.ibm.com; Christopher G. Brinton, Purdue University, cgb@purdue.edu |
| Pseudocode | Yes | Algorithm 1: HFL with Multi-Timescale Gradient Correction (MTGC) (see the hedged sketch after this table) |
| Open Source Code | Yes | The code for this project is available at https://github.com/wenzhifang/MTGC. |
| Open Datasets | Yes | In our experiments, we consider four widely used datasets: EMNIST-Letters (EMNIST-L) [7], Fashion-MNIST [53], CIFAR-10 [23], and CIFAR-100 [23]. |
| Dataset Splits | Yes | The CINIC-10 dataset contains 90,000 training images, 90,000 validation images, and 90,000 test images, significantly larger than CIFAR-10 and CIFAR-100, which contain 60,000 images each. |
| Hardware Specification | Yes | We conduct the experiments based on a cluster of 3 NVIDIA A100 GPUs with 40 GB memory. |
| Software Dependencies | No | The paper mentions "Our code is based on the framework of [1]" but does not specify software dependencies with version numbers (e.g., the Python version or specific library versions such as PyTorch or TensorFlow). |
| Experiment Setup | Yes | Across all algorithms considered, we maintain a consistent learning rate η = 0.1 and batch size 50. |
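For intuition about the pseudocode row, the following is a minimal sketch of a client-side local update with two SCAFFOLD-style correction terms, loosely following the structure suggested by "Algorithm 1: HFL with Multi-Timescale Gradient Correction (MTGC)". The function name `local_steps`, the variable names `c_client` and `c_group`, and the toy quadratic objective are illustrative assumptions, not the authors' exact formulation; consult the released code at https://github.com/wenzhifang/MTGC for the actual implementation.

```python
# Illustrative sketch only (assumed form of a multi-timescale corrected local update).
import numpy as np

def local_steps(w, grad_fn, c_client, c_group, lr=0.1, num_steps=10):
    """Run local SGD on one client, correcting each stochastic gradient with
    a client-level term (c_client, tracking drift toward the group model) and
    a group-level term (c_group, tracking drift toward the global model)."""
    w = w.copy()
    for _ in range(num_steps):
        g = grad_fn(w)                      # stochastic gradient on local data
        w -= lr * (g + c_client + c_group)  # corrected descent step
    return w

# Toy usage on a hypothetical quadratic local loss 0.5 * ||w - target||^2:
if __name__ == "__main__":
    dim = 5
    w_global = np.zeros(dim)
    target = np.ones(dim)                   # local optimum differs from global
    grad_fn = lambda w: w - target          # gradient of the quadratic loss
    c_client = np.zeros(dim)                # assumed: refreshed at each group aggregation
    c_group = np.zeros(dim)                 # assumed: refreshed at each global aggregation
    print(local_steps(w_global, grad_fn, c_client, c_group, lr=0.1))
```

The two correction terms being updated on different aggregation timescales (group-level vs. global) is what the paper's "multi-timescale" naming refers to; the sketch only shows where such terms would enter the local update.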