H-FL: A Hierarchical Communication-Efficient and Privacy-Protected Architecture for Federated Learning
Authors: He Yang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our H-FL framework achieves state-of-the-art performance on different datasets for real-world image recognition tasks. We evaluate H-FL on different datasets and compare the performance to FedAvg [McMahan et al., 2017a], STC [Sattler et al., 2019] and DGC [Lin et al., 2018] in non-IID environments. |
| Researcher Affiliation | Academia | Xi'an Jiaotong University, sleepingcat@stu.xjtu.edu.cn |
| Pseudocode | Yes | Algorithm 1 Runtime distribution reconstruction; Algorithm 2 The workflow for H-FL |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code. |
| Open Datasets | Yes | Specifically, we have trained a modified version of the LeNet5 [LeCun et al., 1998] network on FMNIST [Xiao et al., 2017] and a modified VGG16 [Simonyan and Zisserman, 2014] network on CIFAR10 [Krizhevsky et al., 2009], respectively. (See the dataset and model sketch below this table.) |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits (percentages or sample counts). It only mentions using FMNIST and CIFAR10 in non-IID environments. |
| Hardware Specification | No | The paper does not mention any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The experiment settings are listed in Table 2: CIFAR10 with 100 clients, 3 mediators, η = 0.015, 3 classes per client, I = 10, L = 1; FMNIST with 100 clients, 3 mediators, η = 0.015, 2 classes per client, I = 10, L = 1. (See the partition and configuration sketch below this table.) |
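The paper names its datasets and base networks but not the training framework. A minimal loading sketch, assuming PyTorch/torchvision (an assumption; the paper's "modified" LeNet5 and VGG16 are unspecified, so stock stand-ins appear here):

```python
# Hypothetical setup: the paper does not state its framework or the exact
# architecture modifications; PyTorch/torchvision and stock models are assumed.
import torch.nn as nn
import torchvision
from torchvision import transforms

transform = transforms.ToTensor()

# FMNIST [Xiao et al., 2017] and CIFAR10 [Krizhevsky et al., 2009] are both
# openly available through torchvision.
fmnist = torchvision.datasets.FashionMNIST(
    root="./data", train=True, download=True, transform=transform)
cifar10 = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

class LeNet5(nn.Module):
    """Classic LeNet5 [LeCun et al., 1998], sized for 28x28 FMNIST inputs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
            nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

fmnist_model = LeNet5()                                 # for FMNIST
cifar_model = torchvision.models.vgg16(num_classes=10)  # for CIFAR10
```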
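Table 2 gives per-dataset hyperparameters, but the paper describes neither how the non-IID client shards were built nor any train/validation/test split. A minimal sketch of a class-limited partition under the Table 2 settings, assuming a shard-style assignment (hypothetical; `SETTINGS` and `non_iid_partition` are names introduced here, not from the paper):

```python
# Hypothetical non-IID partition honoring Table 2's "classes per client";
# the paper's actual partitioning procedure is not described.
import numpy as np

SETTINGS = {  # restated from Table 2 of the paper
    "CIFAR10": dict(clients=100, mediators=3, lr=0.015, classes_per_client=3, I=10, L=1),
    "FMNIST":  dict(clients=100, mediators=3, lr=0.015, classes_per_client=2, I=10, L=1),
}

def non_iid_partition(labels, num_clients, classes_per_client, seed=0):
    """Give each client samples from only `classes_per_client` classes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    pools = {c: list(rng.permutation(np.flatnonzero(labels == c))) for c in classes}
    # On average, each class pool is shared by this many clients.
    shares = num_clients * classes_per_client / len(classes)
    per_class_take = max(1, int(len(labels) / len(classes) / shares))
    client_indices = []
    for _ in range(num_clients):
        # Draw from the fullest pools so all classes drain evenly.
        chosen = sorted(classes, key=lambda c: -len(pools[c]))[:classes_per_client]
        shard = []
        for c in chosen:
            shard += [pools[c].pop() for _ in range(min(per_class_take, len(pools[c])))]
        client_indices.append(shard)
    return client_indices

# Example with an FMNIST-sized label vector (10 classes x 6000 samples):
labels = np.repeat(np.arange(10), 6000)
cfg = SETTINGS["FMNIST"]
shards = non_iid_partition(labels, cfg["clients"], cfg["classes_per_client"])
assert all(len(np.unique(labels[s])) <= cfg["classes_per_client"] for s in shards)
```

Under these settings each FMNIST client receives about 600 samples from 2 classes and each CIFAR10 client about 500 samples from 3 classes; since the paper omits the procedure, any reproduction should treat this partitioning as one plausible choice rather than the authors' method.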