FedNP: Towards Non-IID Federated Learning via Federated Neural Propagation
Authors: Xueyang Wu, Hengguan Huang, Youlong Ding, Hao Wang, Ye Wang, Qian Xu
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both image classification tasks with synthetic non-i.i.d image data partitions and real-world non-i.i.d speech recognition tasks demonstrate that our framework effectively alleviates the performance deterioration caused by non-i.i.d data. |
| Researcher Affiliation | Academia | (1) Hong Kong University of Science and Technology, Hong Kong SAR, China; (2) National University of Singapore, Singapore; (3) Shenzhen University, Shenzhen, China; (4) Rutgers University, Piscataway, NJ, USA |
| Pseudocode | Yes | We present such an algorithm with a mild extension to the commonly used FL framework (FedAvg) in Algorithm 1 (see Appendix B). |
| Open Source Code | Yes | Full appendix and codes can be found at https://github.com/CodePothunter/fednp. |
| Open Datasets | Yes | The datasets used in our experiments are public, and codes can be found in the supplementary file. (...) We conduct experiments on CIFAR100 (Krizhevsky, Nair, and Hinton 1995) and Tiny Imagenet (Le and Yang 2015). (...) We evaluate our proposed method on a challenging real conversational speech dataset CHiME-5 (Barker et al. 2018) |
| Dataset Splits | No | The paper describes how non-i.i.d data partitions are generated among clients (e.g., using a Dirichlet distribution or recording sessions; see the sketch after this table) and mentions a "testing set" for CHiME-5, but the main paper text does not give explicit train/validation/test splits as percentages, sample counts, or pre-defined split definitions for the main datasets. |
| Hardware Specification | No | The paper states that "We describe the experimental details, including the environments, implementations, etc., in Appendix D," but does not specify any hardware details like specific GPU or CPU models in the main text. |
| Software Dependencies | No | The paper mentions that "We describe the experimental details, including the environments, implementations, etc., in Appendix D," but does not provide specific software names with version numbers in the main text. |
| Experiment Setup | No | The paper mentions that "We describe the experimental details, including the environments, implementations, etc., in Appendix D," but does not provide specific hyperparameters, training configurations, or other system-level settings in the main text. |
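The table above notes that the image experiments use Dirichlet-based non-i.i.d. client partitions but does not report the exact partitioning procedure in the main text. The snippet below is a minimal sketch of the commonly used label-skew Dirichlet partitioning scheme, not the authors' released code: the function name `dirichlet_partition`, the concentration value `alpha=0.5`, and the client count are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with label-skewed Dirichlet(alpha)
    proportions; smaller alpha yields a more non-i.i.d. partition."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]

    for c in range(num_classes):
        # Shuffle the indices of all samples with class label c.
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        # Draw per-client proportions for this class from Dirichlet(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn the proportions into cut points over this class's samples.
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, shard in enumerate(np.split(idx_c, cuts)):
            client_indices[client_id].extend(shard.tolist())

    return [np.array(ix) for ix in client_indices]

if __name__ == "__main__":
    # Toy example: 50,000 synthetic labels over 100 classes (CIFAR-100-sized).
    fake_labels = np.random.default_rng(1).integers(0, 100, size=50_000)
    parts = dirichlet_partition(fake_labels, num_clients=10, alpha=0.5)
    print([len(p) for p in parts])  # per-client sample counts
```

With small `alpha` each client receives most of its samples from a few classes, reproducing the kind of label-distribution skew the paper targets; the exact values used in the experiments are in the paper's appendix and repository.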