Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Authors: Dong Yin, Yudong Chen, Kannan Ramchandran, Peter Bartlett
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments to show the effectiveness of the median and trimmed mean operations. Our experiments are implemented with Tensorflow (Abadi et al., 2016) on Microsoft Azure system. We use the MNIST (LeCun et al., 1998) dataset and randomly partition the 60,000 training data into m subsamples with equal sizes. |
| Researcher Affiliation | Academia | 1Department of EECS, UC Berkeley 2School of ORIE, Cornell University 3Department of Statistics, UC Berkeley. |
| Pseudocode | Yes | Algorithm 1 Robust Distributed Gradient Descent |
| Open Source Code | No | The paper does not explicitly state that the source code for their methodology is open-source, nor does it provide a link to a code repository. It mentions using 'Tensorflow' but this refers to a third-party tool. |
| Open Datasets | Yes | We use the MNIST (LeCun et al., 1998) dataset and randomly partition the 60,000 training data into m subsamples with equal sizes. |
| Dataset Splits | No | The paper mentions partitioning '60,000 training data into m subsamples' and refers to 'test error' and 'test accuracy', but it does not specify explicit training/validation/test dataset splits (e.g., percentages, counts, or cross-validation setup). |
| Hardware Specification | No | The paper states 'Our experiments are implemented with Tensorflow (Abadi et al., 2016) on Microsoft Azure system.' However, it does not provide specific hardware details such as exact GPU or CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions 'Tensorflow (Abadi et al., 2016)' as an implementation tool, but it does not provide specific version numbers for Tensorflow or any other software dependencies needed to replicate the experiments. |
| Experiment Setup | Yes | For logistic regression, we set m = 40, and for trimmed mean, we choose β = 0.05; for CNN, we set m = 10, and for trimmed mean, we choose β = 0.1. |
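The aggregation rules audited above (coordinate-wise median and β-trimmed mean, used inside the paper's Algorithm 1) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' released code (the audit found none); the function names and the per-coordinate trimming convention are assumptions.

```python
import numpy as np

def coordinate_trimmed_mean(grads, beta):
    """Coordinate-wise beta-trimmed mean of worker gradients.

    grads: array of shape (m, d), one gradient vector per worker machine.
    beta: fraction of values trimmed from EACH tail, per coordinate
          (e.g. beta = 0.05 for logistic regression, 0.1 for the CNN,
          per the Experiment Setup row above).
    """
    m = grads.shape[0]
    k = int(beta * m)  # number of workers trimmed from each end
    # Sort each coordinate independently across workers, drop the k
    # smallest and k largest values, then average what remains.
    sorted_grads = np.sort(grads, axis=0)
    if k > 0:
        sorted_grads = sorted_grads[k:m - k]
    return sorted_grads.mean(axis=0)

def coordinate_median(grads):
    """Coordinate-wise median of worker gradients."""
    return np.median(grads, axis=0)
```

For example, with m = 10 workers and β = 0.1, one Byzantine worker sending an arbitrarily large gradient is discarded per coordinate before averaging, whereas a plain mean would be corrupted by it.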