Taming Fat-Tailed (“Heavier-Tailed” with Potentially Infinite Variance) Noise in Federated Learning
Authors: Haibo Yang, Peiwen Qiu, Jia Liu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In addition to theoretical analysis, we also conduct extensive numerical experiments to study the fat-tailed phenomenon in FL systems and verify the efficacy of our proposed FAT-Clipping algorithms for FL systems with fat-tailed noise. |
| Researcher Affiliation | Academia | Haibo Yang, Dept. of ECE, The Ohio State University, Columbus, OH 43210, yang.5952@osu.edu; Peiwen Qiu, Dept. of ECE, The Ohio State University, Columbus, OH 43210, qiu.617@osu.edu; Jia Liu, Dept. of ECE, The Ohio State University, Columbus, OH 43210, liu@ece.osu.edu |
| Pseudocode | Yes | Algorithm 1: Generalized FedAvg Algorithm (GFedAvg); Algorithm 2: The FAT-Clipping-PR Algorithm; Algorithm 3: The FAT-Clipping-PI Algorithm. (A hedged sketch of the per-round clipping idea appears below the table.) |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] |
| Open Datasets | Yes | In this section, we conduct numerical experiments to verify the theoretical findings in Section 4 using 1) a synthetic function, 2) a convolutional neural network (CNN) with two convolutional layers on CIFAR-10 dataset [43], and 3) RNN on Shakespeare dataset. |
| Dataset Splits | No | The paper describes data distribution across clients (i.i.d. vs. non-i.i.d.) and mentions using standard procedures, but does not explicitly provide specific train/validation/test dataset splits (e.g., percentages or sample counts) in the main text. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware used for its experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software or libraries used in the experiments. |
| Experiment Setup | No | The paper mentions general aspects of the experimental setup, such as the number of clients and the data-heterogeneity parameter p, but it does not provide specific hyperparameter values such as learning rates, batch sizes, or optimizer settings for the numerical experiments. |
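
Since no code accompanies the paper (see the Open Source Code row), the following is a minimal sketch of the per-round variant (FAT-Clipping-PR) as described in its pseudocode: each client runs local SGD, clips its accumulated model update to an L2 ball of radius lam, and the server averages the clipped updates. The toy quadratic objective, the Student-t noise used to emulate fat tails, and all names (`grad_fn`, `eta_local`, `lam`, etc.) are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def clip(v, lam):
    """Standard L2 norm clipping: scale v so that ||v|| <= lam."""
    norm = np.linalg.norm(v)
    return v if norm <= lam else v * (lam / norm)

def fat_clipping_pr(x0, grad_fn, num_clients=10, rounds=100, local_steps=5,
                    eta_local=0.05, eta_global=1.0, lam=1.0, rng=None):
    """Toy simulation of the per-round variant: each client runs local SGD,
    clips its accumulated update once per round, and the server averages
    the clipped updates. Illustrative sketch only, not the authors' code."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(rounds):
        updates = []
        for _ in range(num_clients):
            y = x.copy()
            for _ in range(local_steps):
                # Fat-tailed gradient noise emulated with a Student-t draw
                # (df = 1.5, so the variance is infinite) -- our choice.
                g = grad_fn(y) + rng.standard_t(df=1.5, size=y.shape)
                y -= eta_local * g
            updates.append(clip(y - x, lam))        # per-round clipping
        x += eta_global * np.mean(updates, axis=0)  # server averaging
    return x

# Usage on a toy quadratic f(x) = 0.5 * ||x||^2, whose true gradient is x:
print(fat_clipping_pr(np.ones(5), grad_fn=lambda x: x))
```

Clipping the accumulated update once per round is what distinguishes the -PR variant; a sketch of FAT-Clipping-PI would instead clip each stochastic gradient `g` before every local step.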