Manipulating SGD with Data Ordering Attacks

Authors: Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson

NeurIPS 2021

Reproducibility assessment: each variable below lists the extracted result, followed by the LLM's supporting response.
Research Type: Experimental
Evidence: "We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors."
Researcher Affiliation: Academia
Ilia Shumailov, University of Cambridge & University of Toronto & Vector Institute, ilia.shumailov@cl.cam.ac.uk
Zakhar Shumaylov, University of Cambridge, zs334@cam.ac.uk
Dmitry Kazhdan, University of Cambridge, dk525@cam.ac.uk
Yiren Zhao, University of Cambridge, yiren.zhao@cl.cam.ac.uk
Nicolas Papernot, University of Toronto & Vector Institute, nicolas.papernot@utoronto.ca
Murat A. Erdogdu, University of Toronto & Vector Institute, erdogdu@cs.toronto.edu
Ross Anderson, University of Cambridge & University of Edinburgh, ross.anderson@cl.cam.ac.uk
Pseudocode: Yes
Evidence: "Algorithm 1: A high level description of the BRRR attack algorithm"
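
The paper gives Algorithm 1 only at a high level. As a minimal sketch of the core idea, assuming a surrogate model that ranks training batches by loss (the function and parameter names here are illustrative, not the paper's exact algorithm):

    import torch

    def reorder_batches(batches, surrogate, loss_fn, ascending=True):
        # Score each batch with the surrogate model's loss (no gradients
        # needed), then return the batches sorted by that score.
        # Low-to-high vs. high-to-low orderings correspond to different
        # attack policies; this is a sketch, not the paper's Algorithm 1.
        scored = []
        with torch.no_grad():
            for x, y in batches:
                loss = loss_fn(surrogate(x), y).item()
                scored.append((loss, x, y))
        scored.sort(key=lambda t: t[0], reverse=not ascending)
        return [(x, y) for _, x, y in scored]

The victim then consumes the batches in the returned order; because only the ordering changes, the underlying data itself is left untouched.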
Open Source Code: Yes
Evidence: "Codebase is available here: https://github.com/iliaishacked/sgd_datareorder"
Open Datasets: Yes
Evidence: "We evaluate our attacks using two computer vision and one natural language benchmarks: the CIFAR-10, CIFAR-100 [19] and AGNews [37] datasets."
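
All three benchmarks ship with standard libraries; a minimal loading sketch (the torchtext API in particular differs across releases, so treat this as an assumption rather than the paper's exact setup):

    from torchvision import datasets, transforms
    from torchtext.datasets import AG_NEWS

    tfm = transforms.ToTensor()
    cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
    cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=tfm)
    ag_train = AG_NEWS(root="data", split="train")  # API varies across torchtext versions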
Dataset Splits: No
The paper discusses training and testing, using phrases such as "test dataset loss" and "Test acc" in its tables, but it never defines a validation set or gives percentages/counts for train/validation/test splits.
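
A reproducer would therefore have to choose a split themselves; one conventional, assumed choice (not from the paper) is to hold out part of CIFAR-10's 50,000 training images, reusing the cifar10 object from the sketch above:

    from torch.utils.data import random_split

    # Assumed 90/10 hold-out; the paper does not specify a validation set.
    train_set, val_set = random_split(cifar10, [45000, 5000])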
Hardware Specification: No
The paper does not describe the hardware used for its experiments, such as specific GPU or CPU models, memory, or cloud instance types.
Software Dependencies: No
The paper mentions torchtext for the AGNews model but does not give version numbers for any of the software dependencies used in the experiments.
Experiment Setup: Yes
For CIFAR-10, the authors trained for 100 epochs with target model ResNet18 and surrogate model LeNet5, both using the Adam optimizer with learning rate 0.1 and β = (0.99, 0.9). For CIFAR-100, they trained for 200 epochs with target model ResNet50 and surrogate model MobileNet, using SGD (learning rate 0.1, momentum 0.3) for the target model and Adam for the surrogate. The AGNews models were trained with SGD (learning rate 0.1, momentum 0) for 50 epochs using sparse mean EmbeddingBags.
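
In PyTorch terms, the reported vision-model settings translate roughly to the following sketch (the β tuple is copied exactly as reported; the surrogate models are omitted for brevity):

    import torch
    from torchvision.models import resnet18, resnet50

    # CIFAR-10 target: ResNet18 trained with Adam, lr 0.1, betas as reported.
    c10_target = resnet18(num_classes=10)
    c10_opt = torch.optim.Adam(c10_target.parameters(), lr=0.1, betas=(0.99, 0.9))

    # CIFAR-100 target: ResNet50 trained with SGD, lr 0.1, momentum 0.3.
    c100_target = resnet50(num_classes=100)
    c100_opt = torch.optim.SGD(c100_target.parameters(), lr=0.1, momentum=0.3)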