Hierarchical Channel-spatial Encoding for Communication-efficient Collaborative Learning
Authors: Qihua Zhou, Song Guo, Yi Liu, Jie Zhang, Jiewei Zhang, Tao Guo, Zhenda Xu, Xun Liu, Zhihao Qu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that SGQ achieves a traffic reduction ratio of up to 15.97× and a 9.22× image processing speedup over uniform quantized training, while preserving model accuracy comparable to FP32, even with 4-bit quantization. |
| Researcher Affiliation | Academia | Qihua Zhou¹, Song Guo¹, Yi Liu¹, Jie Zhang¹, Jiewei Zhang¹, Tao Guo¹, Zhenda Xu¹, Xun Liu¹, Zhihao Qu². ¹Department of Computing, The Hong Kong Polytechnic University; ²School of Computer and Information, Hohai University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | Source codes will be shared at Github after the double-blind review. |
| Open Datasets | Yes | Our benchmarks are image classification tasks based on the training of AlexNet [18], VGG-11 [28], ResNet-18/34 [9], ShuffleNet-V2-1.0x/0.5x [21], and MobileNet-V1 [10], with the CIFAR-10/100 (CF10/100) [17], Fashion-MNIST (FM) [36] and mini-ImageNet (MI) [33] datasets. |
| Dataset Splits | No | The paper states that experimental settings are discussed in Section 4.1, but that section only mentions batch sizes and optimizers, not specific train/validation/test dataset splits or their percentages/counts. |
| Hardware Specification | Yes | To match the edge environment, we evaluate SGQ on two types of devices: (1) NVIDIA Jetson Nano series [24], and (2) HUAWEI Atlas 200DK [13], both of which are connected to the NVIDIA RTX 2080Ti server through a 10GbE network. |
| Software Dependencies | Yes | All of these benchmarks are implemented via PyTorch-1.7.1 [25]. |
| Experiment Setup | Yes | For MI, the batch size is 32 with the SGD optimizer; for CF and FM, the batch size is 100 with the Adam [16] optimizer. |
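
For context on the quantization claim in the Research Type row, the sketch below shows generic uniform symmetric quantization of a tensor to 4 bits, the kind of "uniform quantized training" baseline the reported traffic-reduction and speedup figures are measured against. This is not the paper's SGQ channel-spatial encoder; the per-tensor scaling rule, the function names, and the omission of bit packing are illustrative assumptions.

```python
# Minimal sketch of uniform symmetric 4-bit tensor quantization (the generic
# baseline the paper compares against). NOT the paper's SGQ method; the scaling
# scheme and names here are assumptions for illustration only.
import torch

def uniform_quantize(t: torch.Tensor, num_bits: int = 4):
    """Map a float tensor to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 7 for 4-bit
    scale = t.abs().max().clamp(min=1e-8) / qmax   # per-tensor scale (assumed)
    q = torch.clamp(torch.round(t / scale), -qmax, qmax).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from the quantized values."""
    return q.float() * scale

if __name__ == "__main__":
    grad = torch.randn(1024)                       # stand-in for a gradient tensor
    q, scale = uniform_quantize(grad, num_bits=4)
    recovered = dequantize(q, scale)
    # 4 bits per value instead of 32 shrinks the raw payload 8x before packing.
    print("max abs error:", (grad - recovered).abs().max().item())
```

At 4 bits per value instead of FP32's 32, the raw communication payload shrinks by 8× even before any packing or entropy coding, which is the baseline that SGQ's hierarchical channel-spatial encoding is reported to improve on.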
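
Since the Experiment Setup row lists only batch sizes and optimizers, the sketch below assembles that configuration in PyTorch. Only the batch sizes (32 for mini-ImageNet with SGD, 100 for CIFAR and Fashion-MNIST with Adam) and the PyTorch dependency come from the paper; the learning rates, momentum value, FakeData stand-in, and the choice of ResNet-18 as the backbone are assumptions made to keep the example runnable.

```python
# Minimal sketch of the per-dataset training configuration quoted above.
# Only the batch sizes and optimizer choices are from the paper; learning
# rates, momentum, the FakeData placeholder, and the backbone are assumed.
import torch
import torch.nn as nn
import torchvision

BATCH_SIZE = {"mini-imagenet": 32, "cifar10": 100, "cifar100": 100, "fashion-mnist": 100}

def build_optimizer(model: nn.Module, dataset: str) -> torch.optim.Optimizer:
    """Pick the optimizer the paper reports for each benchmark dataset."""
    if dataset == "mini-imagenet":                               # batch size 32, SGD
        return torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    if dataset in {"cifar10", "cifar100", "fashion-mnist"}:      # batch size 100, Adam
        return torch.optim.Adam(model.parameters(), lr=1e-3)
    raise ValueError(f"Unknown dataset: {dataset}")

if __name__ == "__main__":
    model = torchvision.models.resnet18(num_classes=10)          # one of the listed backbones
    optimizer = build_optimizer(model, "cifar10")
    loader = torch.utils.data.DataLoader(
        torchvision.datasets.FakeData(size=512, image_size=(3, 32, 32), num_classes=10,
                                      transform=torchvision.transforms.ToTensor()),
        batch_size=BATCH_SIZE["cifar10"], shuffle=True)
    criterion = nn.CrossEntropyLoss()
    for images, labels in loader:                                # one illustrative epoch
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```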