QSFL: A Two-Level Uplink Communication Optimization Framework for Federated Learning

Authors: Liping Yi, Gang Wang, Xiaoguang Liu

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4. Experiments: We implement QSFL on the FL framework developed in Luo et al. (2019) and use 4 NVIDIA GeForce RTX 3090 GPUs to execute QSFL in parallel. We evaluate QSFL on an image classification task and an object detection task.
Researcher Affiliation | Collaboration | Nankai-Orange D.T. Joint Lab, College of Computer Science, Nankai University, Tianjin, China. Correspondence to: Gang Wang <wgzwp@nbjl.nankai.edu.cn>, Xiaoguang Liu <liuxg@nbjl.nankai.edu.cn>.
Pseudocode | Yes | Algorithm 1: SCSS Algorithm
Open Source Code | Yes | We implement QSFL on the FL framework developed in Luo et al. (2019); code is released at https://github.com/LipingYi/QSFL
Open Datasets | Yes | CNN on FEMNIST: we train a CNN network (2 Conv + 1 FC) with 110,526 parameters on the real-world FEMNIST dataset (Caldas et al., 2018b), available at https://github.com/TalwalkarLab/leaf/tree/master/data/femnist (a hedged sketch of a matching architecture follows the table).
Dataset Splits | No | Each client's local dataset is divided into training/testing sets with a ratio of 8:2. No explicit mention of a validation split for either dataset (see the split sketch after the table).
Hardware Specification | Yes | Use 4 NVIDIA GeForce RTX 3090 GPUs to execute QSFL in parallel.
Software Dependencies | No | The paper mentions implementing QSFL on an FL framework and provides a GitHub link, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | We also report detailed hyperparameter settings of FL in the two tasks, as shown in Tab. 7. Table 7 (hyperparameter settings of FL; C: total number of clients, η: learning rate, E: epochs, B: batch size) gives, for CNN on FEMNIST: C = 36, η = 0.01, E = 10, B = 1 (see the local-update sketch below).
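
The paper specifies the FEMNIST model only as "2 Conv + 1 FC" with 110,526 parameters. The sketch below is a minimal, hypothetical PyTorch realization of that description; the kernel sizes (5×5), channel widths (16/32), padding, and 2×2 max pooling are assumptions not stated in the paper, chosen because this particular configuration reproduces the reported parameter count exactly for 28×28 FEMNIST inputs and 62 classes.

```python
import torch
import torch.nn as nn

class FEMNISTCNN(nn.Module):
    """Hypothetical 2 Conv + 1 FC network for 28x28 FEMNIST (62 classes).

    Kernel sizes and channel widths are assumptions, not taken from the
    paper; this choice happens to yield exactly 110,526 parameters,
    matching the count reported for "CNN on FEMNIST".
    """
    def __init__(self, num_classes: int = 62):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # 416 params
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # 12,832 params
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # 97,278 params

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = FEMNISTCNN()
print(sum(p.numel() for p in model.parameters()))  # 110526
```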
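The Dataset Splits row reports an 8:2 train/test split per client and no validation split. A minimal sketch of such a per-client split, assuming PyTorch-style datasets; `client_dataset` is a placeholder name, not an identifier from the authors' code.

```python
import torch
from torch.utils.data import random_split

def split_client_data(client_dataset, train_frac=0.8, seed=0):
    """Split one client's local dataset 8:2 into train/test subsets.

    `client_dataset` stands in for any torch Dataset; the paper reports
    the 8:2 ratio but no validation split, so none is created here.
    """
    n_train = int(train_frac * len(client_dataset))
    n_test = len(client_dataset) - n_train
    return random_split(
        client_dataset, [n_train, n_test],
        generator=torch.Generator().manual_seed(seed),  # reproducible split
    )
```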
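To make the Table 7 settings concrete, the sketch below plugs them into a generic FedAvg-style local update, the loop that C = 36 clients would each run for E = 10 epochs at batch size B = 1 with learning rate η = 0.01. The optimizer choice (SGD) and all function names are assumptions; QSFL's distinguishing pieces, the SCSS client sampling and the model-level quantization, wrap around this loop and are not reproduced here.

```python
import torch
from torch.utils.data import DataLoader

# Table 7 settings for "CNN on FEMNIST" (names are generic, not the authors' code)
config = dict(num_clients=36, lr=0.01, local_epochs=10, batch_size=1)

def local_update(model, train_set, cfg=config):
    """Generic FedAvg-style local training a client would run per round.

    Assumes SGD; QSFL's SCSS sampling and uplink quantization happen
    around this loop and are not sketched here.
    """
    loader = DataLoader(train_set, batch_size=cfg["batch_size"], shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=cfg["lr"])
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(cfg["local_epochs"]):   # E = 10 local epochs
        for x, y in loader:                # B = 1, so one sample per step
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()              # weights sent on the uplink
```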