Low Precision Local Training is Enough for Federated Learning
Authors: Zhiwei Li, Yiqiu Li, Binbin Lin, Zhongming Jin, Weizhong Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments across extensive benchmarks are conducted to showcase the effectiveness of our proposed method. |
| Researcher Affiliation | Collaboration | 1Fudan University 2Zhejiang University 3Alibaba Cloud Computing 4Fullong Inc. |
| Pseudocode | Yes | The details are given in Algorithm 1. Our pseudocode in Algorithm 2 depicts the process of low precision local training on the client device and high precision aggregation on the server. |
| Open Source Code | Yes | Code is released at https://github.com/digbangbang/LPT-FL. |
| Open Datasets | Yes | We conduct experiments over four commonly used datasets: Fashion MNIST [34], CIFAR10 [19], CIFAR100 [19] and CINIC10 [8]. |
| Dataset Splits | Yes | In our experiment, we set the test dataset as the validation dataset. |
| Hardware Specification | Yes | All of our models are trained on a GeForce RTX 4090. |
| Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies (e.g., libraries, frameworks) used in the experiments. |
| Experiment Setup | Yes | For Fashion MNIST, CIFAR10, CINIC10 and CIFAR100, we run 200 communication rounds with the local epoch set to 1. There are 80 clients in total, and the participation ratio in each round is set to 40%. We use a Dirichlet distribution to simulate non-iid data distribution and set α to 0.01, 0.04, and 0.16. The local learning rate is set to 10⁻³ with the Adam optimizer [17]. We report the last 5 rounds' global model average performance evaluated using the test split of the datasets. For the quantization method, we adopt Block Floating Point Quantization with the number of bits set to 6, 8 and 32 (without quantization). Some of the other hyperparameter settings are included in Appendix C. |
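The block floating point quantization the setup refers to can be illustrated with a minimal sketch: values are grouped into fixed-size blocks, each block shares one power-of-two exponent derived from its largest magnitude, and the per-value mantissas are rounded to the chosen bit width. The function name, block size, and rounding/clipping details below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bfp_quantize(x, bits=8, block_size=64):
    """Illustrative block floating point quantization (not the paper's code).

    Each block of `block_size` values shares one power-of-two exponent;
    mantissas are rounded to a signed `bits`-bit integer grid.
    """
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    mant_max = 2 ** (bits - 1) - 1  # largest signed mantissa magnitude
    for start in range(0, x.size, block_size):
        blk = x[start:start + block_size]
        max_abs = np.abs(blk).max()
        if max_abs == 0.0:
            out[start:start + block_size] = 0.0
            continue
        # Shared exponent: smallest power of two covering the block's range.
        exp = np.ceil(np.log2(max_abs))
        scale = 2.0 ** (exp - (bits - 1))  # step size of the mantissa grid
        mant = np.clip(np.round(blk / scale), -mant_max, mant_max)
        out[start:start + block_size] = mant * scale
    return out
```

With 8 bits the per-value error within a block is bounded by the block's step size (2^(exp - 7)), which is why the paper can trade 6- or 8-bit local training against the full-precision (32-bit) baseline.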