Byzantine-Resilient High-Dimensional SGD with Local Iterations on Heterogeneous Data

Authors: Deepesh Data, Suhas Diggavi

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We corroborate our theoretical results with preliminary experiments for neural network training. In this section, we present preliminary numerical results on a non-convex objective. Additional implementation details can be found in Appendix F in the supplementary material.
Researcher Affiliation | Academia | University of California, Los Angeles, USA. Correspondence to: Deepesh Data <deepesh.data@gmail.com>.
Pseudocode | Yes | Algorithm 1: Byzantine-Resilient SGD with Local Iterations; Algorithm 2: Robust Accumulated Gradient Estimation (RAGE). A minimal illustrative sketch of the client/server loop is given after the table.
Open Source Code | No | The paper mentions implementation details in Appendix F, but does not state that source code for the described methodology is released, nor does it provide a link to it.
Open Datasets | Yes | We train a single layer neural network for image classification on the MNIST handwritten digit (from 0-9) dataset.
Dataset Splits | No | The MNIST dataset has 60,000 training images (with 6,000 images of each label) and 10,000 test images (each having 28x28 = 784 pixels)...
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | All clients compute stochastic gradients on a batch-size of 128 in each iteration and communicate the local parameter vectors with the server after taking H = 7 local iterations. For all the defense mechanisms, we start with a step-size of 0.08 and decrease it by a factor of 0.96 when the difference in the corresponding test accuracies in the last 2 consecutive epochs is less than 0.001. A sketch of this step-size rule is given after the table.
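The following is a minimal sketch of the client/server structure behind Algorithms 1 and 2, assuming a synchronous setup in which honest clients run H local SGD steps before reporting to the server; the function and variable names are ours, and coordinate-wise median is used only as a simple stand-in for the paper's RAGE subroutine, which is considerably more involved.

    import numpy as np

    def local_updates(x, honest_clients, H, lr, rng):
        # Each honest client starts from the current global model x, takes H
        # local stochastic-gradient steps, and reports its accumulated update.
        updates = []
        for grad_fn in honest_clients:
            x_local = x.copy()
            for _ in range(H):
                x_local = x_local - lr * grad_fn(x_local, rng)
            updates.append(x_local - x)
        return updates

    def robust_aggregate(updates):
        # Stand-in for the paper's RAGE subroutine: coordinate-wise median.
        return np.median(np.stack(updates), axis=0)

    def byzantine_resilient_sgd(x0, honest_clients, num_byzantine, rounds, H, lr, seed=0):
        rng = np.random.default_rng(seed)
        x = x0.astype(float).copy()
        for _ in range(rounds):
            updates = local_updates(x, honest_clients, H, lr, rng)
            # Byzantine clients may report arbitrary vectors in place of updates.
            updates += [rng.normal(scale=10.0, size=x.shape) for _ in range(num_byzantine)]
            x = x + robust_aggregate(updates)
        return x

    # Toy usage with heterogeneous quadratic losses f_i(x) = ||x - b_i||^2 / 2,
    # whose stochastic gradient is (x - b_i) plus noise; b_i differs per client.
    targets = [np.array([1.0, 2.0]), np.array([1.5, 1.0]), np.array([0.5, 2.5])]
    clients = [lambda x, rng, b=b: (x - b) + 0.1 * rng.normal(size=x.shape) for b in targets]
    x_final = byzantine_resilient_sgd(np.zeros(2), clients, num_byzantine=1, rounds=200, H=7, lr=0.05)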
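The step-size rule quoted in the Experiment Setup row can be read as the sketch below, assuming the condition is checked once per epoch against recorded test accuracies; the function name and default arguments are ours, not the authors'.

    def decayed_step_size(test_accuracies, initial_lr=0.08, decay=0.96, tol=1e-3):
        # Start from the initial step size and multiply it by `decay` every time
        # the test accuracy changed by less than `tol` between two consecutive epochs.
        lr = initial_lr
        for prev, curr in zip(test_accuracies, test_accuracies[1:]):
            if abs(curr - prev) < tol:
                lr *= decay
        return lr

    # Example: the accuracy plateaus after the third epoch, so the step size
    # is decayed twice: 0.08 * 0.96**2 ~ 0.0737.
    print(decayed_step_size([0.85, 0.90, 0.905, 0.9053, 0.9056]))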