Online Adversarial Attacks

Authors: Andjela Mladenovic, Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel

ICLR 2022

Each entry below lists the reproducibility variable, the result, and the LLM response.
Research Type (Experimental): "Finally, we complement our theoretical results by conducting experiments on MNIST, CIFAR-10, and Imagenet classifiers, revealing the necessity of online algorithms in achieving near-optimal performance and also the rich interplay between attack strategies and online attack selection, enabling simple strategies like FGSM to outperform stronger adversaries."
Researcher Affiliation (Collaboration): Andjela Mladenovic (Mila, Université de Montréal); Avishek Joey Bose (Mila, McGill University); Hugo Berard (Mila, Université de Montréal); William L. Hamilton (Mila, McGill University); Simon Lacoste-Julien (Mila, Université de Montréal); Pascal Vincent (Mila, Université de Montréal; Meta AI Research); Gauthier Gidel (Mila, Université de Montréal)
Pseudocode (Yes): Algorithm 1, VIRTUAL and VIRTUAL+. Inputs: t ∈ [k, ..., n − k], R = ∅, S_A = ∅. Sampling phase: observe the first t data points and construct a sorted list R with the indices of the top-k data points seen; the sort ensures V(R[1]) ≥ V(R[2]) ≥ ... ≥ V(R[k]). Selection phase: // VIRTUAL+ removes L2-3 and adds L4
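The sampling/selection structure described above can be sketched in Python. This is a hedged reconstruction of a VIRTUAL+-style k-secretary rule, not the paper's exact Algorithm 1; the function name, the value list standing in for the oracle V, and the reference-list update rule are assumptions:

```python
import heapq

def virtual_plus_sketch(values, k, t):
    """Hedged sketch of a VIRTUAL+-style k-secretary selection rule.

    values : list of item values V(i), revealed one at a time
    k      : number of items that may be selected
    t      : length of the sampling phase, t in [k, ..., n - k]
    """
    # Sampling phase: keep a min-heap R holding the top-k values among
    # the first t items, so R[0] is the k-th best value seen so far.
    R = []
    for i in range(t):
        heapq.heappush(R, values[i])
        if len(R) > k:
            heapq.heappop(R)

    # Selection phase: accept an item when it beats the current k-th
    # best reference value, updating the reference list as we go.
    selected = []
    for i in range(t, len(values)):
        kth_best = R[0] if len(R) == k else float("-inf")
        if len(selected) < k and values[i] > kth_best:
            selected.append(i)
            heapq.heapreplace(R, values[i])
    return selected
```

On a stream such as `[1, 2, 3, 10, 4, 5]` with `k=1, t=3`, the sketch samples the first three values and then accepts index 3 (value 10), the first post-sampling item beating the reference.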
Open Source Code (Yes): Code can be found at: https://github.com/facebookresearch/OnlineAttacks
Open Datasets (Yes): "We perform experiments on the MNIST (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky, 2009) datasets, where we simulate a data stream D by generating 1000 permutations of the test set and feeding each instantiation to Alg. 2."
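The stream simulation described in that quote can be sketched as follows; the function name, NumPy usage, and fixed seed are assumptions, and each permuted ordering plays the role of one instantiation of the stream D fed to Alg. 2:

```python
import numpy as np

def make_streams(test_set_size, num_permutations=1000, seed=0):
    """Generate random orderings of test-set indices; each ordering
    acts as one instantiation of the simulated data stream D."""
    rng = np.random.default_rng(seed)
    return [rng.permutation(test_set_size) for _ in range(num_permutations)]
```

For MNIST or CIFAR-10, `test_set_size` would be 10000 and `num_permutations` 1000, matching the quoted setup.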
Dataset Splits (No): The paper uses pre-trained models and simulates data streams over the MNIST and CIFAR-10 test sets, but it does not specify the training, validation, and test splits used to train those models or for its own experimental setup.
Hardware Specification (No): The paper does not give specific details about the hardware used to run the experiments, only a general acknowledgment of "access to computational resources".
Software Dependencies (No): The paper describes the algorithms and experimental setup but does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or other libraries).
Experiment Setup (Yes): For a given permutation π and an attack method (FGSM or PGD), the online fool rates of the NAIVE baseline and of an algorithm A are computed as F^NAIVE_π and F^A_π respectively. In Fig. 4, 20 permutations π_i ∈ S_n, i ∈ [n], of D are sampled uniformly, and a scatter graph of points with coordinates (F^NAIVE_{π_i}, F^A_{π_i}) is plotted for different algorithms A, attacks with k = 1000, and datasets.
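A hedged sketch of how those scatter coordinates could be computed, assuming a boolean array `fooled` recording per-point attack success and per-algorithm index-selection functions (all names here are illustrative, not the paper's code):

```python
import numpy as np

def fool_rate(selected_indices, fooled):
    # Online fool rate: fraction of selected points whose attack succeeded.
    return float(np.mean(fooled[selected_indices]))

def scatter_points(perms, fooled, select_naive, select_alg, k):
    # One (F_NAIVE_pi, F_A_pi) coordinate per sampled permutation pi.
    return [
        (fool_rate(select_naive(pi, k), fooled),
         fool_rate(select_alg(pi, k), fooled))
        for pi in perms
    ]
```

Each selection function maps a permutation (a stream ordering) and budget k to the k indices attacked online; plotting the returned pairs reproduces the scatter-graph comparison against the NAIVE baseline.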