A Little Is Enough: Circumventing Defenses For Distributed Learning

Authors: Gilad Baruch, Moran Baruch, Yoav Goldberg

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide two kinds of experiments: (1) empirically validating the claim regarding the variance between correct workers, and (2) validating the applicability of the methods by attacking real-world networks. We experiment with attacking the following models, with and without the defenses present. Following the experiments described in the state-of-the-art defenses [28, 29, 11], we consider simple architectures on the first two datasets, MNIST [16] and CIFAR10 [15]. To strengthen our claims, we also experimented with the modern Wide ResNet architecture [30] on CIFAR100. The model architectures and hyper-parameters can be found in the supplementary materials. The models were trained with n = 51 workers, of which m = 12 (24%) were corrupted and non-omniscient (see the attack sketch after this table).
Researcher Affiliation | Collaboration | Moran Baruch (moran.baruch@biu.ac.il)¹, Gilad Baruch (gilad.baruch@biu.ac.il)¹, Yoav Goldberg (yogo@cs.biu.ac.il)¹ ²; ¹ Dept. of Computer Science, Bar Ilan University, Israel; ² The Allen Institute for Artificial Intelligence
Pseudocode | Yes | Algorithm 1: Synchronous SGD; Algorithm 2: Bulyan Algorithm; Algorithm 3: Preventing Convergence Attack; Algorithm 4: Backdoor Attack (a Bulyan sketch follows this table)
Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include a link to a code repository.
Open Datasets | Yes | We experiment with attacking the following models... MNIST [16] and CIFAR10 [15]. To strengthen our claims, we also experimented with the modern Wide ResNet architecture [30] on CIFAR100. (All three datasets are publicly available; see the loading sketch after this table.)
Dataset Splits | No | The paper mentions training and testing but does not specify explicit train/validation/test splits, percentages, or sample counts.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper states that model architectures and hyper-parameters are in the supplementary materials but does not list specific software dependencies with version numbers in the main text.
Experiment Setup | No | The paper states only that the model architectures and hyper-parameters can be found in the supplementary materials; the main text does not detail the experimental setup.
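
The core perturbation behind the paper's attacks (the Preventing Convergence Attack, Algorithm 3, in its non-omniscient form) can be sketched compactly: each corrupted worker estimates the coordinate-wise mean and standard deviation from the corrupted workers' own gradients and submits mean − z·std. The following NumPy/SciPy sketch is our reconstruction from the paper's description; the function name and variable names are illustrative, not the authors' code.

```python
# Hedged sketch of the non-omniscient "A Little Is Enough" perturbation.
# z_max follows the paper's use of the standard-normal quantile; exact
# names (s, z_max, little_is_enough) are our assumptions.
import numpy as np
from scipy.stats import norm

def little_is_enough(malicious_grads, n, m):
    """Each of the m corrupted workers submits mean - z_max * std, with
    mean/std estimated coordinate-wise from the corrupted workers' own
    gradients (no knowledge of benign workers required)."""
    # Number of benign "supporters" the attacker needs for a majority.
    s = n // 2 + 1 - m
    # Largest z for which enough benign updates are expected to lie
    # further from the mean than the malicious one.
    z_max = norm.ppf((n - m - s) / (n - m))
    G = np.stack(malicious_grads)        # shape (m, d)
    mu, sigma = G.mean(axis=0), G.std(axis=0)
    return mu - z_max * sigma            # one update, replicated by all m workers

# Example with the paper's setting: n = 51 workers, m = 12 corrupted (24%).
rng = np.random.default_rng(0)
fake_update = little_is_enough([rng.normal(size=10) for _ in range(12)], n=51, m=12)
```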
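Of the listed algorithms, Bulyan (Algorithm 2) is the strongest defense the attack is evaluated against. Below is a minimal NumPy sketch of Bulyan as formulated by Mhamdi et al. [11]: iterated Krum selection followed by a coordinate-wise trimmed mean. Shapes and names are assumptions under the standard formulation (requires n ≥ 4f + 3), not the paper's implementation.

```python
# Minimal sketch of the Bulyan aggregation rule [11]; names are illustrative.
import numpy as np

def krum_select(updates, f):
    """Index of the update with the lowest Krum score: the sum of squared
    distances to its n - f - 2 nearest neighbours."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = [np.sort(dists[i])[1 : n - f - 1].sum() for i in range(n)]  # skip self
    return int(np.argmin(scores))

def bulyan(updates, f):
    """Select theta = n - 2f updates via iterated Krum, then average the
    beta = theta - 2f values closest to the coordinate-wise median."""
    pool = [np.asarray(u, dtype=float) for u in updates]
    n = len(pool)
    theta, beta = n - 2 * f, n - 4 * f
    selected = [pool.pop(krum_select(pool, f)) for _ in range(theta)]
    S = np.stack(selected)                       # shape (theta, d)
    med = np.median(S, axis=0)
    order = np.argsort(np.abs(S - med), axis=0)  # closest-to-median first
    closest = np.take_along_axis(S, order[:beta], axis=0)
    return closest.mean(axis=0)
```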
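All three benchmark datasets named in the Open Datasets row are publicly downloadable. A minimal sketch (assumption: PyTorch/torchvision, which the paper does not name as its framework):

```python
# Fetch the three public benchmarks used in the experiments.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist    = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
cifar10  = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
```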