Adversarial Examples Are Not Bugs, They Are Features
Authors: Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To corroborate our theory, we show that it is possible to disentangle robust from non-robust features in standard image classification datasets. Specifically, given a training dataset, we construct: 1. A robustified version for robust classification (Figure 1a). ... 2. A non-robust version for standard classification (Figure 1b). ... The results (Figure 2b) indicate that the classifier learned using the new dataset attains good accuracy in both standard and adversarial settings (see additional evaluation in Appendix D.2). |
| Researcher Affiliation | Academia | Andrew Ilyas MIT ailyas@mit.edu; Shibani Santurkar MIT shibani@mit.edu; Dimitris Tsipras MIT tsipras@mit.edu; Logan Engstrom MIT engstrom@mit.edu; Brandon Tran MIT btran115@mit.edu; Aleksander Madry MIT madry@mit.edu |
| Pseudocode | Yes | We provide pseudocode for the construction in Figure 5 (Appendix C). ... (pseudocode in Appendix C Figure 6). |
| Open Source Code | No | The corresponding datasets for CIFAR-10 are publicly available at http://git.io/adv-datasets. This link provides only the constructed datasets, not source code implementing the methodology. |
| Open Datasets | Yes | The corresponding datasets for CIFAR-10 are publicly available at http://git.io/adv-datasets. |
| Dataset Splits | No | The paper refers to 'training set' and 'test set' for CIFAR-10, but does not explicitly mention or detail validation dataset splits, proportions, or specific sample counts for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper describes the general training process but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |
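The "robustified" dataset referenced in the Research Type row is built, per the paper, by optimizing each synthetic input so that a robust model's deep representation matches that of a target training image (pseudocode in Appendix C). As a minimal illustration of that representation-matching objective, the sketch below stands in a fixed linear map for the robust network's penultimate-layer features and runs plain gradient descent; the function names (`rep`, `robustify`), the linear feature map, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical linear "representation" g(x) = W x, standing in for the robust
# model's penultimate-layer features (the paper uses a trained robust network).
def rep(W, x):
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def robustify(W, x_target, x_init, steps=500, lr=0.05):
    """Gradient descent on ||g(x) - g(x_target)||^2, mirroring the
    representation-matching step in the paper's robust-dataset construction
    (Figure 1a / Appendix C). Step count and learning rate are illustrative."""
    g_target = rep(W, x_target)
    x = list(x_init)
    for _ in range(steps):
        residual = [gi - ti for gi, ti in zip(rep(W, x), g_target)]
        # Gradient of the objective w.r.t. x is 2 * W^T residual.
        grad = [2 * sum(W[i][j] * residual[i] for i in range(len(W)))
                for j in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# Toy usage: start from a random input and pull its features toward a target.
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
x_target = [random.uniform(0, 1) for _ in range(4)]
x_init = [random.uniform(0, 1) for _ in range(4)]
x_robust = robustify(W, x_target, x_init)
```

In the paper's actual procedure the starting point is a different randomly chosen training image, so the resulting input inherits only the features the robust model relies on; this toy version captures only the optimization step, not the relabeling or the neural feature extractor.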