Generating Adversarial Examples with Adversarial Networks
Authors: Chaowei Xiao, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, Dawn Song
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first evaluate AdvGAN for both semi-whitebox and black-box settings on MNIST [LeCun and Cortes, 1998] and CIFAR-10 [Krizhevsky and Hinton, 2009]. We also perform a semi-whitebox attack on the ImageNet dataset [Deng et al., 2009]. We then apply AdvGAN to generate adversarial examples on different target models and test the attack success rate for them under the state-of-the-art defenses and show that our method can achieve higher attack success rates compared to other existing attack strategies. We generate all adversarial examples for different attack methods under an L∞ bound of 0.3 on MNIST and 8 on CIFAR-10, for a fair comparison. In general, as shown in Table 1, AdvGAN has several advantages over other white-box and black-box attacks. |
| Researcher Affiliation | Academia | 1University of Michigan, Ann Arbor 2University of California, Berkeley 3Massachusetts Institute of Technology |
| Pseudocode | No | The paper describes processes such as dynamic distillation in a step-by-step manner, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper provides links to third-party resources and challenges (e.g., 'https://github.com/MadryLab/mnist_challenge', 'github.com/tensorflow/models/blob/master/research/resnet'), but it does not include an explicit statement or link for the open-source code of their own proposed methodology (AdvGAN). |
| Open Datasets | Yes | In this section, we first evaluate AdvGAN for both semi-whitebox and black-box settings on MNIST [LeCun and Cortes, 1998] and CIFAR-10 [Krizhevsky and Hinton, 2009]. We also perform a semi-whitebox attack on the ImageNet dataset [Deng et al., 2009]. |
| Dataset Splits | No | The paper mentions using well-known datasets like MNIST and CIFAR-10 which have standard train/test splits. However, it does not explicitly provide specific percentages or details for training, validation, and test splits used in their experiments, nor does it refer to a predefined validation split from a citation or external source. |
| Hardware Specification | No | The paper does not include any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, memory, or cloud instance specifications). |
| Software Dependencies | No | The paper mentions 'TensorFlow' and 'Adam as our solver', but it does not specify version numbers for these or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | We use Adam as our solver [Kinga and Adam, 2015], with a batch size of 128 and a learning rate of 0.001. We set the confidence κ = 0 for both Opt. and AdvGAN. We generate all adversarial examples for different attack methods under an L∞ bound of 0.3 on MNIST and 8 on CIFAR-10, for a fair comparison. |
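The experiment setup above fixes an L∞ perturbation budget (0.3 for MNIST pixels in [0, 1], 8 for CIFAR-10 pixels in [0, 255]) so that all attacks are compared fairly. The paper does not release its implementation, but the budget itself is a simple projection onto an L∞ ball; a minimal NumPy sketch of that constraint (function name and pixel ranges are illustrative assumptions, not from the paper) looks like:

```python
import numpy as np

def project_linf(x, x_adv, eps, lo=0.0, hi=1.0):
    """Clip a candidate adversarial example so that
    ||x_adv - x||_inf <= eps and pixel values stay in [lo, hi]."""
    perturbation = np.clip(x_adv - x, -eps, eps)
    return np.clip(x + perturbation, lo, hi)

# Example: an MNIST-scale image with an over-large perturbation
x = np.full((28, 28), 0.5)      # clean image, pixels in [0, 1]
x_adv = x + 0.9                 # candidate perturbation exceeds the budget
bounded = project_linf(x, x_adv, eps=0.3)
print(np.abs(bounded - x).max())  # at most 0.3
```

For CIFAR-10 the same projection would be called with `eps=8, lo=0, hi=255`. This projection is what makes "attack success rate under a fixed L∞ bound" a like-for-like comparison across methods.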