Adversarial Examples Are a Natural Consequence of Test Error in Noise

Authors: Justin Gilmer, Nicolas Ford, Nicholas Carlini, Ekin Cubuk

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon. We establish close connections between the adversarial robustness and corruption robustness research programs, with the strongest connection in the case of additive Gaussian noise. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as ImageNet-C.
Researcher Affiliation | Industry | Google Brain. This work was completed as part of the Google AI Residency. Correspondence to: Nicolas Ford <nicf@google.com>, Justin Gilmer <gilmer@google.com>.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any specific links or statements about the availability of open-source code for the methods it describes.
Open Datasets | Yes | In this work we will investigate several different models trained on the CIFAR-10 and ImageNet datasets.
Dataset Splits | No | The paper mentions using …
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | The PGD attack was run with ϵ = 1, so the distances to the decision boundary reported here are cut off at 1. The PGD attack was run with 200 steps on each sample. For all the robustness tests we used PGD with 100 steps and a step size of ϵ/25.
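The Experiment Setup row quotes the paper's PGD configuration: ϵ = 1 with 200 steps per sample for measuring distances to the decision boundary, and 100 steps with a step size of ϵ/25 for the robustness tests. The sketch below is a minimal illustration of a PGD attack with those hyperparameters, assuming an L2 constraint (the paper's comparison with Gaussian noise is in L2), a PyTorch classifier, image batches of shape (B, C, H, W), and pixel values in [0, 1]; the function name and all implementation details are assumptions for illustration, not the authors' code (the paper releases none).

```python
# Hypothetical sketch of the quoted PGD setup; PyTorch, the L2 projection,
# and the [0, 1] pixel-range clipping are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def pgd_l2(model, x, y, eps=1.0, steps=100, step_size=None):
    """L2 projected-gradient-descent attack on a batch (x, y) of images."""
    if step_size is None:
        step_size = eps / 25.0  # step size quoted for the robustness tests
    x = x.detach()
    x_adv = x.clone()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Ascend the loss along the L2-normalized gradient direction.
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
        x_adv = x_adv.detach() + step_size * grad / grad_norm.view(-1, 1, 1, 1)

        # Project back onto the L2 ball of radius eps around the clean input,
        # then clip to the assumed [0, 1] pixel range.
        delta = x_adv - x
        delta_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
        scale = (eps / delta_norm).clamp(max=1.0).view(-1, 1, 1, 1)
        x_adv = (x + delta * scale).clamp(0.0, 1.0)

    return x_adv.detach()
```

For example, pgd_l2(model, images, labels, eps=1.0, steps=200) would correspond to the 200-step decision-boundary measurements and steps=100 to the robustness tests; the step size for the 200-step runs is not quoted above, so the ϵ/25 default is only an assumption.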