Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Authors: Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks. |
| Researcher Affiliation | Academia | Jiadong Lin, Chuanbiao Song, Kun He: School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China ({jdlin,cbsong,brooklet60}@hust.edu.cn); Liwei Wang: School of Electronics Engineering and Computer Sciences, Peking University, Peking, China (wanglw@cis.pku.edu.cn); John E. Hopcroft: Department of Computer Science, Cornell University, NY 14853, USA (jeh@cs.cornell.edu) |
| Pseudocode | Yes | Algorithm 1 SI-NI-FGSM ... Algorithm 2 SI-NI-TI-DIM (minimal sketches of both algorithms appear below the table) |
| Open Source Code | Yes | Code is available at https://github.com/JHL-HUST/SI-NI-FGSM. |
| Open Datasets | Yes | Dataset. We randomly choose 1000 images belonging to the 1000 categories from ILSVRC 2012 validation set, which are almost correctly classified by all the testing models. ... Extensive experiments on the ImageNet dataset (Russakovsky et al., 2015) |
| Dataset Splits | No | The paper uses 1000 images randomly chosen from the ILSVRC 2012 validation set, but it does not describe any further split of this set into training, validation, or test subsets for generating adversarial examples or evaluating transferability, beyond stating that the images come from the validation set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory) are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are mentioned in the paper. |
| Experiment Setup | Yes | Hyper-parameters. For the hyper-parameters, we follow the settings in (Dong et al., 2018) with the maximum perturbation as ϵ = 16, number of iterations T = 10, and step size α = 1.6. For MI-FGSM, we adopt the default decay factor µ = 1.0. For DIM, the transformation probability is set to 0.5. For TIM, we adopt the Gaussian kernel and the size of the kernel is set to 7 × 7. For our SI-NI-FGSM, the number of scale copies is set to m = 5. |
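
The hyper-parameter row above fully specifies the core attack, so the update rule is easy to reconstruct. Below is a minimal PyTorch sketch of SI-NI-FGSM (Algorithm 1); it is not the authors' code (the official repository is in TensorFlow) and assumes image tensors scaled to [0, 1], so the paper's ϵ = 16 on the 0-255 pixel scale becomes 16/255. The function name `si_ni_fgsm` is illustrative.

```python
import torch

def si_ni_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, m=5):
    """Minimal sketch of SI-NI-FGSM (Lin et al., ICLR 2020).

    Assumes `model` maps images in [0, 1] (shape N x C x H x W) to logits.
    Defaults mirror the paper: eps = 16/255, T = 10, decay mu = 1.0, and
    m = 5 scale copies; alpha = eps / steps gives the paper's step size.
    """
    alpha = eps / steps
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                       # accumulated momentum gradient

    for _ in range(steps):
        # Nesterov lookahead: evaluate gradients at the anticipated point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)

        # Scale invariance: average the loss over m copies S_i(x) = x / 2^i.
        loss = sum(loss_fn(model(x_nes / 2 ** i), y) for i in range(m)) / m
        grad = torch.autograd.grad(loss, x_nes)[0]

        # Momentum update with per-example L1 normalization (as in MI-FGSM).
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)

        # Signed step, projected into the eps-ball around x and into [0, 1].
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv
```

By linearity of the gradient, averaging the loss over the m scaled copies before differentiating is equivalent to Algorithm 1's average of per-copy gradients, and alpha = eps / steps reproduces the paper's α = 1.6 for ϵ = 16, T = 10.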
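
The DIM and TIM settings in the same row (transformation probability 0.5, 7 × 7 Gaussian kernel) slot into the loop above to give SI-NI-TI-DIM (Algorithm 2). A hedged sketch of those two pieces follows; the helper names `gaussian_kernel` and `input_diversity`, the sigma value, and the 330-pixel resize bound for 299 × 299 inputs are assumptions, not values confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=3.0):
    """Depthwise 2-D Gaussian kernel for TIM's gradient smoothing.

    The 7x7 size matches the paper; sigma = 3.0 is an assumed value.
    """
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(g1d, g1d)
    return (k / k.sum()).expand(3, 1, size, size)  # one kernel per RGB channel

def input_diversity(x, p=0.5, resize=330):
    """DIM: with probability p, randomly resize the image and pad back.

    resize = 330 is the bound Xie et al. (2019) use for 299x299 inputs.
    """
    if torch.rand(1).item() >= p:
        return x                                   # keep the input unchanged
    new = int(torch.randint(x.shape[-1], resize, (1,)).item())
    x = F.interpolate(x, size=new, mode="nearest")
    pad = resize - new
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(x, (left, pad - left, top, pad - top))
```

Inside the loop of `si_ni_fgsm`, DIM replaces `model(x_nes / 2 ** i)` with `model(input_diversity(x_nes / 2 ** i))`, and TIM convolves the averaged gradient with the kernel, e.g. `grad = F.conv2d(grad, gaussian_kernel(), padding=3, groups=3)`, before the momentum update.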