Spatially Transformed Adversarial Examples
Authors: Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song
ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted. |
| Researcher Affiliation | Academia | (1) University of Michigan, Ann Arbor, USA; (2) Massachusetts Institute of Technology, MA, USA; (3) University of California, Berkeley, USA |
| Pseudocode | No | No pseudocode or algorithm blocks are explicitly labeled in the paper. |
| Open Source Code | No | The paper references third-party codebases for the models used (e.g., https://github.com/tensorflow/models/blob/master/research/resnet/resnet_model.py, https://github.com/MadryLab/cifar10_challenge/blob/master/model.py, https://github.com/tensorflow/cleverhans/tree/master/examples/nips17_adversarial_competition/), but does not explicitly state that the code for their proposed method ('stAdv') is open-sourced or provide a direct link to it. |
| Open Datasets | Yes | We show adversarial examples with high perceptual quality for both MNIST (LeCun & Cortes, 1998) and CIFAR-10 (Krizhevsky et al., 2014) datasets. |
| Dataset Splits | No | The paper mentions 'MNIST test data' and 'CIFAR-10 test data' but does not explicitly state training/validation/test split percentages or sample counts, and relies only on the standard test sets without a formally defined split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions the use of 'TensorFlow' implicitly through links to TensorFlow-based models, but does not provide specific version numbers for TensorFlow or any other software libraries or dependencies used for their implementation. |
| Experiment Setup | Yes | We set τ as 0.05 for all our experiments. We use confidence κ = 0 for both C&W and stAdv for a fair comparison. We leverage L-BFGS (Liu & Nocedal, 1989) as our solver with backtracking linear search. |
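
The 'Experiment Setup' row above quotes the paper's hyperparameters (τ = 0.05, confidence κ = 0, and an L-BFGS solver with backtracking line search). Since no official stAdv code is linked, the sketch below is only an illustration of the general recipe the paper describes: warp the input with a smooth per-pixel flow field, then minimise a C&W-style adversarial loss plus τ times a flow-smoothness penalty. The toy random linear classifier, the helper names (`flow_warp`, `flow_smoothness`, `objective`), and the use of SciPy's L-BFGS-B with finite-difference gradients in place of L-BFGS with backtracking line search are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize


def flow_warp(image, flow):
    """Resample a (H, W) image at positions displaced by a (2, H, W) flow field."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + flow[0], 0, H - 1)   # displaced row coordinates
    x = np.clip(xs + flow[1], 0, W - 1)   # displaced column coordinates
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = y - y0, x - x0
    # Bilinear interpolation of the four neighbouring pixels.
    return ((1 - wy) * (1 - wx) * image[y0, x0] + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0] + wy * wx * image[y1, x1])


def flow_smoothness(flow):
    """Sum of flow-difference magnitudes over neighbouring pixel pairs
    (a simplified stand-in for the paper's flow-smoothness loss)."""
    loss = 0.0
    for axis in (0, 1):                   # vertical and horizontal neighbours
        du = np.diff(flow[0], axis=axis)
        dv = np.diff(flow[1], axis=axis)
        loss += np.sqrt(du ** 2 + dv ** 2 + 1e-8).sum()
    return loss


# Toy stand-ins (illustrative only): a random image and a random linear "classifier".
rng = np.random.default_rng(0)
H = W = 8
NUM_CLASSES = 3
image = rng.random((H, W))
weights = rng.normal(size=(NUM_CLASSES, H * W))
target = 2                                # hypothetical target class

TAU, KAPPA = 0.05, 0.0                    # hyperparameter values quoted in the paper


def objective(flat_flow):
    """Adversarial loss on the flow-warped image plus tau * flow smoothness."""
    flow = flat_flow.reshape(2, H, W)
    logits = weights @ flow_warp(image, flow).ravel()
    others = np.delete(logits, target)
    l_adv = max(float(others.max() - logits[target]), -KAPPA)  # C&W-style, confidence kappa
    return l_adv + TAU * flow_smoothness(flow)


# SciPy's L-BFGS-B (finite-difference gradients) stands in here for the
# paper's L-BFGS solver with backtracking line search.
result = minimize(objective, np.zeros(2 * H * W), method="L-BFGS-B")
adv_image = flow_warp(image, result.x.reshape(2, H, W))
print("objective:", result.fun,
      "| predicted class:", int((weights @ adv_image.ravel()).argmax()))
```

On a real model the adversarial loss would be computed from the network's logits and differentiated with the framework's autodiff rather than finite differences; only τ = 0.05 and κ = 0 are taken from the paper.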