RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets
Authors: Liping Li, Wei Xu, Tianyi Chen, Georgios B. Giannakis, Qing Ling (pp. 1544-1551)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerically, experiments on real dataset corroborate the competitive performance of RSA and a complexity reduction compared to the state-of-the-art alternatives. |
| Researcher Affiliation | Academia | (1) Department of Automation, University of Science and Technology of China, Hefei, Anhui, China; (2) Digital Technology Center, University of Minnesota, Twin Cities, Minneapolis, Minnesota, USA; (3) School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, Guangdong, China |
| Pseudocode | Yes | Algorithm 1 Distributed SGD, Algorithm 2 RSA for Robust Distributed Learning |
| Open Source Code | No | The paper's references link only to the PDF of the paper itself (http://home.ustc.edu.cn/~qingling/papers/C_AAAI2019_RSA.pdf); there is no explicit statement about, or link to, source code for the described methodology. |
| Open Datasets | Yes | We conduct experiments on the MNIST dataset, which has 60k training samples and 10k testing samples |
| Dataset Splits | No | The paper mentions 60k training samples and 10k testing samples but does not specify a validation set or split. |
| Hardware Specification | Yes | We launch 20 worker processes and 1 master process on a computer with Intel i7-6700 CPU @ 3.40GHz. |
| Software Dependencies | No | The paper mentions machine learning algorithms like SGD and softmax regression, but does not provide specific version numbers for any software libraries, frameworks, or environments used. |
| Experiment Setup | Yes | ℓ1-norm RSA chooses the parameter λ = 0.1 and the step size α^k = 0.003/√k. The total number of iterations for every algorithm is 5000. |
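The experiment-setup row above can be illustrated with a minimal sketch of ℓ1-norm RSA. This is not the paper's code: the problem (toy least-squares), the variable names, and the rescaled step size are all illustrative assumptions; the paper's MNIST run uses λ = 0.1 and α^k = 0.003/√k over 5000 iterations, and its master update also includes a small regularizer f_0 that is dropped here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): each regular worker i
# holds least-squares data (A_i, b_i); two Byzantine workers report
# arbitrary models every iteration.
d, n_regular, n_byz = 5, 8, 2
x_true = rng.normal(size=d)
workers = []
for _ in range(n_regular):
    A = rng.normal(size=(20, d))
    workers.append((A, A @ x_true + 0.01 * rng.normal(size=20)))

lam = 0.1                             # l1 penalty weight (paper's lambda)
x0 = np.zeros(d)                      # master model
xs = [np.zeros(d) for _ in workers]   # regular workers' local models

for k in range(1, 3001):
    # O(1/sqrt(k)) step size as in the paper; the base constant is
    # rescaled for this toy problem so the demo converges quickly.
    alpha = 0.5 / np.sqrt(k)
    # Regular worker: local gradient plus the subgradient
    # lam*sign(x_i - x_0) of the l1 consensus penalty lam*||x_i - x_0||_1.
    for i, (A, b) in enumerate(workers):
        grad = A.T @ (A @ xs[i] - b) / len(b)
        xs[i] -= alpha * (grad + lam * np.sign(xs[i] - x0))
    # Byzantine workers send arbitrary (here: huge random) models.
    byz = [100.0 * rng.normal(size=d) for _ in range(n_byz)]
    # Master: sign-based aggregation caps every worker's per-coordinate
    # influence at lam, so outliers cannot drag x_0 away.
    x0 -= alpha * lam * sum(np.sign(x0 - xi) for xi in xs + byz)
```

Because each worker, honest or Byzantine, contributes only a bounded sign vector to the master update, the master model stays near the regular workers' consensus even though the Byzantine models are orders of magnitude larger.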