Constructing Unrestricted Adversarial Examples with Generative Models
Authors: Yang Song, Rui Shu, Nate Kushman, Stefano Ermon
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results on the MNIST, SVHN, and CelebA datasets show that unrestricted adversarial examples can bypass strong adversarial training and certified defense methods designed for traditional adversarial attacks. |
| Researcher Affiliation | Collaboration | Yang Song (Stanford University, yangsong@cs.stanford.edu); Rui Shu (Stanford University, ruishu@cs.stanford.edu); Nate Kushman (Microsoft Research, nkushman@microsoft.com); Stefano Ermon (Stanford University, ermon@cs.stanford.edu) |
| Pseudocode | Yes | In what follows, we explore two attacks derived from variants of AC-GAN (see pseudocode in Appendix B). |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating the availability of the source code for the described methodology. |
| Open Datasets | Yes | The datasets used in our experiments are MNIST [25], SVHN [26], and CelebA [27]. |
| Dataset Splits | No | The paper mentions using training and test partitions but does not provide the specific percentages or counts for training, validation, and test splits needed for reproduction, nor does it cite a standard split that quantifies all partitions. |
| Hardware Specification | No | The paper does not specify the exact models or types of hardware (e.g., GPUs, CPUs, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like TensorFlow (Appendix C: 'We implement all models in TensorFlow.') but does not provide specific version numbers for these or any other key libraries or solvers. |
| Experiment Setup | Yes | For more details about architectures, hyperparameters and adversarial training methods, please refer to Appendix C. |