Adversarial Example Games

Authors: Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, Will Hamilton

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets, outperforming prior state-of-the-art approaches with an average relative improvement of 29.9% and 47.2% against undefended and robust models (Table 2 & 3) respectively.
Researcher Affiliation | Collaboration | Avishek Joey Bose (Mila, McGill University, joey.bose@mail.mcgill.ca); Gauthier Gidel (Mila, Université de Montréal, gauthier.gidel@umontreal.ca); Hugo Berard (Mila, Université de Montréal; Facebook AI Research); Andre Cianflone (Mila, McGill University); Pascal Vincent (Mila, Université de Montréal; Facebook AI Research); Simon Lacoste-Julien (Mila, Université de Montréal); William L. Hamilton (Mila, McGill University)
Pseudocode | No | The paper includes diagrams (e.g., Figure 2: AEG framework architecture) and descriptions of processes, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks with structured steps.
Open Source Code | Yes | Code: https://github.com/joeybose/Adversarial-Example-Games.git
Open Datasets | Yes | We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets...
Dataset Splits | No | The paper describes creating 'random equally-sized splits of the data (10000 examples per splits)' for evaluation in a No Box setting, using 'one fold to train the split classifier' and evaluating the 'remaining split classifiers on unseen target examples D'. While this implies a train/test separation, the paper does not explicitly specify distinct training, validation, and test splits with percentages or sample counts.
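The splitting procedure quoted above (random, equally sized, disjoint folds of 10000 examples each) can be sketched as follows; this is an illustrative reconstruction, not the paper's code, and the function name and seed are assumptions:

```python
import numpy as np

def make_equal_splits(n_examples: int, split_size: int = 10000, seed: int = 0):
    """Partition dataset indices into random, equally sized, disjoint splits."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)
    n_splits = n_examples // split_size
    # Any remainder is dropped so every split has exactly `split_size` examples.
    return [idx[i * split_size:(i + 1) * split_size] for i in range(n_splits)]

# e.g. the MNIST training set (60000 examples) yields 6 disjoint splits of 10000
splits = make_equal_splits(60000)
```

In the No Box setting described above, one such split would train the attacker's classifier while the remaining splits serve as unseen evaluation targets.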
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using the 'Extra Adam optimizer' and 'SGD with Armijo line search', and PyTorch is listed in the references, but no specific version numbers for any software dependencies are provided to ensure reproducibility.
Experiment Setup | Yes | We perform all attacks, including baselines, with respect to the ℓ∞ norm constraint with ϵ = 0.3 for MNIST and ϵ = 0.03125 for CIFAR-10. Full details of our model architectures, including hyperparameters, employed in our AEG framework can be found in Appendix D.
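The ℓ∞ constraint quoted above means every pixel of an adversarial example may deviate from the clean input by at most ϵ. A minimal sketch of enforcing such a budget via projection (the function name and the [0, 1] pixel range are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Reported budgets: eps = 0.3 for MNIST, eps = 0.03125 for CIFAR-10.
def project_linf(x_adv: np.ndarray, x_clean: np.ndarray, eps: float) -> np.ndarray:
    """Project an adversarial example onto the l-infinity ball of radius eps
    around the clean input, then clip to the valid pixel range [0, 1]."""
    delta = np.clip(x_adv - x_clean, -eps, eps)   # bound per-pixel perturbation
    return np.clip(x_clean + delta, 0.0, 1.0)     # keep pixels in valid range
```

Attacks under this constraint typically apply such a projection after every perturbation step, so the final example always satisfies the stated ϵ budget.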