Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning
Authors: Sen Cui, Weishen Pan, Jian Liang, Changshui Zhang, Fei Wang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on synthetic and real-world datasets demonstrate the superiority of our approach over baselines and its effectiveness in achieving both fairness and consistency across all local clients. |
| Researcher Affiliation | Collaboration | Sen Cui¹, Weishen Pan¹, Jian Liang², Changshui Zhang¹, Fei Wang³. ¹ Institute for Artificial Intelligence, Tsinghua University (THUAI)...; ² Alibaba Group, China; ³ Department of Population Health Sciences, Weill Cornell Medicine, USA |
| Pseudocode | Yes | Algorithm 1 in Appendix shows all steps of our method. |
| Open Source Code | Yes | The source codes of FCFL are made publicly available at https://github.com/cuis15/FCFL. |
| Open Datasets | Yes | (1) Synthetic dataset: following the setting in [30, 23], the synthetic data is generated from two given non-convex objectives; (2) UCI Adult dataset [5]: Adult contains more than 40000 adult records... (3) eICU dataset: We select eICU [31], a clinical dataset collecting information about patients and their ICU admissions, with hospital information. ... (a) If your work uses existing assets, did you cite the creators? [Yes] We cite the creators and discuss it in Appendix |
| Dataset Splits | Yes | We split the dataset into 80% training data, 10% validation data, and 10% testing data. (A minimal split sketch appears below the table.) |
| Hardware Specification | Yes | All experiments were run on a server with an NVIDIA RTX 3090 GPU and an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz. |
| Software Dependencies | No | The paper states 'We implemented our method and baselines in PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For the optimization, we set the learning rate to 0.001 and use the Adam optimizer for all methods. The batch size is 64. The maximum number of epochs is 100. We decay δl and δg from 10 to 0.001 with decay rate β = 0.999. (A minimal training-setup sketch based on these values appears below the table.) |
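
The 80/10/10 split quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch of such a split; the ratios come from the paper, while the function name, shuffling strategy, and fixed seed are assumptions added here for illustration, not details confirmed by the authors.

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Split sample indices into 80% train / 10% validation / 10% test.

    The 80/10/10 ratio is taken from the paper; shuffling with a fixed
    seed is an assumption made here for reproducibility.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```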
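
The reported hyperparameters (Adam, learning rate 0.001, batch size 64, at most 100 epochs, and a decay of δl and δg from 10 toward 0.001 with rate β = 0.999) can be wired together as in the sketch below. The placeholder model and the choice to apply the decay once per epoch are assumptions; the quoted setup does not state whether the decay is applied per epoch, per iteration, or per communication round.

```python
import torch

# Hyperparameters quoted in the table above; everything else in this
# sketch (the placeholder model, the decay granularity) is an assumption.
LEARNING_RATE = 1e-3
BATCH_SIZE = 64
MAX_EPOCHS = 100
DELTA_INIT, DELTA_MIN, BETA = 10.0, 1e-3, 0.999  # schedule for delta_l and delta_g

model = torch.nn.Linear(in_features=100, out_features=2)  # hypothetical placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

delta_l = delta_g = DELTA_INIT
for epoch in range(MAX_EPOCHS):
    # ... iterate over mini-batches of size BATCH_SIZE and call optimizer.step() here ...
    # Decay the slack parameters geometrically, clipped at the stated floor of 0.001.
    delta_l = max(DELTA_MIN, delta_l * BETA)
    delta_g = max(DELTA_MIN, delta_g * BETA)
```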