FedSampling: A Better Sampling Strategy for Federated Learning
Authors: Tao Qi, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four benchmark datasets show that FedSampling can effectively improve the performance of federated learning. |
| Researcher Affiliation | Collaboration | ¹Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; ²Microsoft Research Asia, Beijing 100080, China; ³Sony AI, 1-7-1 Konan, Minato-ku, Tokyo 108-0075, Japan; ⁴Zhongguancun Laboratory, Beijing 100094, China; ⁵Institute for Precision Medicine of Tsinghua University, Beijing 102218, China |
| Pseudocode | Yes | Algorithm 1: Workflow of FedSampling (a hedged sketch of the size-estimation step appears after the table). |
| Open Source Code | Yes | More model details are in the Appendix and Codes (https://github.com/taoqi98/FedSampling). |
| Open Datasets | Yes | Experiments are conducted on four benchmark datasets: a text dataset MIND [Wu et al., 2020], two Amazon review datasets (i.e., Toys and Beauty) [McAuley et al., 2015], and an image dataset EMNIST [Cohen et al., 2017]. |
| Dataset Splits | No | While the paper mentions "All hyper-parameters are selected on the validation set", it does not provide explicit details about the size or percentage of this validation split for any of the datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper mentions models such as Text-CNN, Transformer, and ResNet, but it does not specify the software libraries or version numbers used to implement them, nor any other software dependencies with versions. |
| Experiment Setup | Yes | In our FedSampling method, the size threshold M and privacy budget ϵ are set to 300 and 3 respectively. The learning rate η is set to 0.05, and the number K of samples for participating in a training round is set to 2048. (A toy sketch using these values follows the table.) |
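
The Pseudocode row cites Algorithm 1, which this page does not reproduce. For orientation, here is a minimal Python sketch of the privacy-preserving data-size estimation that FedSampling's uniform data sampling builds on, plugged with the reported M = 300 and ϵ = 3. The randomized-response mechanism and de-biasing formula are standard local-differential-privacy assumptions on our part, and the function names (`report_size`, `estimate_total_size`, `sample_ratio`) are illustrative; the paper's exact estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def report_size(n_i: int, M: int = 300, eps: float = 3.0) -> int:
    """Client side: report the local data size under local differential privacy.

    Randomized response: with probability p send the true size (clipped at M),
    otherwise send a uniform random size in [1, M]. Choosing
    p = (e^eps - 1) / (e^eps + M - 1) makes the mechanism eps-LDP.
    (Assumed mechanism; the paper's exact formulation may differ.)
    """
    p = (np.exp(eps) - 1) / (np.exp(eps) + M - 1)
    if rng.random() < p:
        return min(n_i, M)
    return int(rng.integers(1, M + 1))

def estimate_total_size(reports, M: int = 300, eps: float = 3.0) -> float:
    """Server side: de-bias the noisy reports and sum them to estimate N.

    E[r] = p * n + (1 - p) * (M + 1) / 2, so each report is inverted before
    summing. The estimate is unbiased (up to the clipping at M) but noisy;
    its variance shrinks as the number of clients grows.
    """
    p = (np.exp(eps) - 1) / (np.exp(eps) + M - 1)
    r = np.asarray(reports, dtype=float)
    return float(np.sum((r - (1 - p) * (M + 1) / 2) / p))

def sample_ratio(K: int, N_hat: float) -> float:
    """Each client keeps every local example with probability K / N_hat, so in
    expectation about K samples participate per round across all clients."""
    return min(1.0, K / max(N_hat, 1.0))

# Toy run: 1000 clients with local sizes in [50, 500).
sizes = rng.integers(50, 500, size=1000)
N_hat = estimate_total_size([report_size(int(n)) for n in sizes])
print(f"true N = {sizes.sum()}, estimated N = {N_hat:.0f}, "
      f"ratio for K = 2048: {sample_ratio(2048, N_hat):.4f}")
```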
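
The Experiment Setup row's remaining hyper-parameters slot into the per-round training loop. Below is a toy sampling-then-averaging round using the reported η = 0.05 and K = 2048; the linear-regression model and the sample-count-weighted averaging are illustrative stand-ins, not the paper's Text-CNN/Transformer/ResNet models or its exact aggregation rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, X, y, ratio, lr=0.05):
    """One client's round: keep each local example with probability `ratio`
    (FedSampling's uniform data sampling), then run plain SGD on the kept
    subset. Squared error on a linear model stands in for the real task."""
    keep = rng.random(len(X)) < ratio
    for xi, yi in zip(X[keep], y[keep]):
        w = w - lr * 2 * (xi @ w - yi) * xi
    return w, int(keep.sum())

# Toy federation: 10 clients, 500 samples each, shared ground truth.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(10):
    X = rng.normal(scale=0.5, size=(500, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=500)))

total = sum(len(X) for X, _ in clients)  # stand-in for the LDP estimate N_hat
ratio = min(1.0, 2048 / total)           # K / N_hat with K = 2048
w = np.zeros(3)
for _ in range(5):                       # training rounds
    updates, counts = zip(*(local_sgd(w, X, y, ratio) for X, y in clients))
    # Average client models, weighting by participating sample counts
    # (a FedAvg-style rule; the paper's aggregation may differ).
    w = np.average(np.stack(updates), axis=0, weights=np.maximum(counts, 1))
print("recovered weights:", np.round(w, 2))  # should approach w_true
```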