DELTA: Diverse Client Sampling for Faster Federated Learning
Authors: Lin Wang, Yongxin Guo, Tao Lin, Xiaoying Tang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the efficiency of the theoretical algorithm FedDELTA and the practical algorithm FedPracDELTA on various datasets. Our results are validated through experiments on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | (1) School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen); (2) The Shenzhen Institute of Artificial Intelligence and Robotics for Society; (3) The Guangdong Provincial Key Laboratory of Future Networks of Intelligence; (4) Research Center for Industries of the Future, Westlake University; (5) School of Engineering, Westlake University |
| Pseudocode | Yes | Algorithm 1 FedDELTA and FedPracDELTA: Federated learning with unbiased diverse sampling (a sketch of the unbiased-sampling step follows the table) |
| Open Source Code | Yes | Our code is available at https://github.com/L3030/DELTA_FL. |
| Open Datasets | Yes | Datasets. (1) We evaluate FedDELTA on synthetic data and split-FashionMNIST. (2) We evaluate FedPracDELTA on non-iid FashionMNIST, CIFAR-10 and LEAF [3]. (a non-iid partitioning sketch follows the table) |
| Dataset Splits | No | The paper describes how data is partitioned across clients (non-iid settings, split-FashionMNIST) but gives no explicit train/validation/test split, whether as percentages or sample counts. |
| Hardware Specification | Yes | For all experiments, we use NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | All the algorithms run in the same environment with a fixed learning rate of 0.001. We train each experiment for 2000 rounds to ensure that the global loss converges stably. (a toy loop with these settings follows the table) |
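
The pseudocode row points to Algorithm 1, which pairs diverse client sampling with an unbiased (importance-weighted) aggregation. Below is a minimal sketch of that unbiasedness mechanism, assuming multinomial sampling with replacement; the probability vector `probs` stands in for DELTA's diversity- and variance-based probabilities, and the function names (`sample_clients`, `unbiased_aggregate`) are hypothetical, not taken from the released code.

```python
import numpy as np

def sample_clients(probs, num_sampled, rng):
    """Draw `num_sampled` client indices i.i.d. from `probs`
    (multinomial sampling with replacement)."""
    return rng.choice(len(probs), size=num_sampled, replace=True, p=probs)

def unbiased_aggregate(updates, sampled, probs, weights):
    """Importance-weighted average over the sampled clients.

    Scaling client i's update by weights[i] / (K * probs[i]) makes the
    aggregate an unbiased estimate of the full weighted average
    sum_i weights[i] * updates[i], whatever sampling distribution
    `probs` is used.
    """
    k = len(sampled)
    return sum(weights[i] / (k * probs[i]) * updates[i] for i in sampled)
```

Because each draw picks client i with probability `probs[i]`, the expectation of the weighted sum equals the full average over all clients, which is why the sampling distribution can be tuned for diversity without biasing the global update.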
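
The experiment-setup row reports a fixed learning rate of 0.001 and 2000 training rounds. The toy end-to-end loop below uses those two settings and reuses `sample_clients` and `unbiased_aggregate` from the sketch above; the synthetic quadratic clients and the 10-clients-per-round figure are illustrative assumptions, not values from the paper.

```python
import numpy as np
# Reuses sample_clients and unbiased_aggregate from the previous sketch.

LEARNING_RATE = 1e-3    # fixed learning rate reported in the paper
NUM_ROUNDS = 2000       # rounds reported for stable convergence
CLIENTS_PER_ROUND = 10  # illustrative assumption, not from the paper

def run_toy_federated_training(num_clients=100, seed=0):
    rng = np.random.default_rng(seed)
    # Synthetic 1-D quadratic clients f_i(x) = 0.5 * (x - t_i)^2, so the
    # local gradient at the global model x is simply (x - t_i).
    targets = rng.normal(size=num_clients)
    weights = np.full(num_clients, 1.0 / num_clients)  # equal client weights
    x = 0.0
    for _ in range(NUM_ROUNDS):
        grads = x - targets                    # per-client gradients at x
        norms = np.abs(grads) + 1e-12          # keep probabilities nonzero
        probs = norms / norms.sum()            # diversity-style probabilities
        sampled = sample_clients(probs, CLIENTS_PER_ROUND, rng)
        updates = -LEARNING_RATE * grads       # one local SGD step per client
        x += unbiased_aggregate(updates, sampled, probs, weights)
    return x  # drifts toward the mean of `targets`, the global optimum
```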
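
The datasets row mentions non-iid FashionMNIST and CIFAR-10 splits, but this excerpt does not say how the clients are partitioned. A common recipe for producing such splits is a per-class Dirichlet allocation; the sketch below assumes that scheme purely for illustration and is not the paper's documented procedure.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng):
    """Assign example indices to `num_clients` non-iid shards.

    For each class, a Dirichlet(alpha) draw decides what fraction of that
    class each client receives; smaller alpha yields more skewed shards.
    """
    shards = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for shard, part in zip(shards, np.split(idx, cuts)):
            shard.extend(part.tolist())
    return shards

# Example: 10 clients over fake labels for a 10-class dataset.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=60000)  # FashionMNIST-sized label array
client_indices = dirichlet_partition(labels, num_clients=10, alpha=0.5, rng=rng)
```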