Making Adversarial Examples More Transferable and Indistinguishable

Authors: Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, Zhisong Pan (pp. 3662-3670)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on an ImageNet-compatible dataset show that our method generates more indistinguishable adversarial examples and achieves higher attack success rates without extra running time and resource."
Researcher Affiliation | Academia | "(1) Command and Control Engineering College, Army Engineering University, Nanjing 210007, China; (2) School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China"
Pseudocode | Yes | "We summarize NI-TI-DI-AITM as the combination of AIFGTM, NIM, TIM and DIM, and the procedure is given in Algorithm 1." (An illustrative sketch of such a combined loop is given after the table.)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described; the only link provided is for a dataset.
Open Datasets | Yes | "We utilize 1000 images which are used in the NIPS 2017 adversarial competition to conduct the following experiments." The referenced dataset link is https://github.com/tensorflow/cleverhans/tree/master/examples/nips17_adversarial_competition/dataset. (An illustrative loading sketch is given after the table.)
Dataset Splits | No | The paper does not specify explicit training/validation/test splits with percentages or counts. It mentions using 1000 images for experiments but gives no explicit breakdown of data for training or validation.
Hardware Specification | Yes | "We compare the running time of each attack mentioned in Table 1 using a piece of Nvidia GPU GTX 1080 Ti."
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. It mentions general tools and frameworks such as Foolbox, PyTorch, TensorFlow, and JAX in its references, but does not give versions for its own implementation.
Experiment Setup | Yes | "According to TI-DIM (Dong et al. 2019) and NI-FGSM (Lin et al. 2020), we set the maximum perturbation ε = 16, and the number of iterations T = 10. Specifically, we set the kernel size to 15 × 15 in normal TI-DIM and NI-TI-DIM while 9 × 9 in TI-DI-AITM." (These settings are reflected in the attack sketch after the table.)
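
The Pseudocode and Experiment Setup rows describe NI-TI-DI-AITM as a combination of Nesterov momentum (NIM), translation-invariant gradient smoothing (TIM), diverse inputs (DIM), and a tanh-based Adam-style step (AIFGTM) with ε = 16 (on 0-255 pixels, i.e. 16/255 on [0, 1] inputs), T = 10, and a 15 × 15 or 9 × 9 kernel. The sketch below is a minimal, hedged reconstruction of such a loop, not the authors' Algorithm 1: the resize range in the diverse-input transform, the Gaussian sigma, the transform probability, and the lam/beta2 scaling in the tanh step are placeholder values, and the tanh-based update only approximates the Adam-style scaling of AI-FGTM.

```python
# Illustrative sketch (not the authors' code) of a combined NI + TI + DI +
# tanh-momentum attack loop. Constants such as sigma=3.0, prob=0.7, lam=1.3,
# and beta2=0.999 are placeholders, not values taken from the paper.
import numpy as np
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    """Depthwise Gaussian kernel for translation-invariant (TI) gradient smoothing."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    k = torch.tensor(k, dtype=torch.float32)
    return k.expand(3, 1, size, size).clone()  # one copy of the kernel per RGB channel

def diverse_input(x, low=299, high=330, prob=0.7):
    """DIM-style random resize and pad; output is resized back so any model accepts it."""
    if torch.rand(1).item() > prob:
        return x
    h, w = x.shape[-2:]
    rnd = int(torch.randint(low, high, (1,)).item())
    xr = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = high - rnd
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    xr = F.pad(xr, (left, pad - left, top, pad - top))
    return F.interpolate(xr, size=(h, w), mode="nearest")

def ni_ti_di_aitm(model, x, y, eps=16 / 255.0, T=10, mu=1.0,
                  beta2=0.999, lam=1.3, kernel_size=15):
    """Minimal sketch of the combined attack loop (not Algorithm 1 verbatim)."""
    alpha = eps / T                                   # per-iteration step size
    kernel = gaussian_kernel(kernel_size).to(x.device)
    pad = kernel_size // 2
    x_adv = x.clone()
    m = torch.zeros_like(x)                           # accumulated first-moment gradient
    v = torch.zeros_like(x)                           # second moment, Adam-style
    for _ in range(T):
        x_adv.requires_grad_(True)
        x_nes = x_adv + alpha * mu * m                # Nesterov look-ahead (NIM)
        loss = F.cross_entropy(model(diverse_input(x_nes)), y)   # DIM transform
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = F.conv2d(grad, kernel, padding=pad, groups=3)     # TIM smoothing
        m = mu * m + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        v = beta2 * v + (1 - beta2) * grad ** 2
        # tanh-based step in the spirit of AI-FGTM (scaling constants are illustrative)
        x_adv = x_adv.detach() + alpha * torch.tanh(lam * m / (v.sqrt() + 1e-8))
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

The kernel_size argument can be set to 15 (as reported for TI-DIM and NI-TI-DIM) or 9 (as reported for TI-DI-AITM); everything else about the scaling is an assumption of this sketch.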
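For the Open Datasets row, the snippet below shows one way the 1000 NIPS 2017 competition images could be loaded. It assumes the images are 299 × 299 PNGs named by an ImageId column in a CSV that also carries a TrueLabel column; the file name "images.csv" and both column names are assumptions about the dataset layout, not details stated in the paper.

```python
# Hypothetical loader for the NIPS 2017 adversarial-competition images.
# "images.csv", "ImageId", and "TrueLabel" are assumed names, not taken from the paper.
import csv
import os
import numpy as np
from PIL import Image

def load_nips17_images(image_dir, csv_path):
    """Return (N, 299, 299, 3) float images in [0, 1] and their integer labels."""
    images, labels = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            path = os.path.join(image_dir, row["ImageId"] + ".png")
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
            images.append(img)
            labels.append(int(row["TrueLabel"]))
    return np.stack(images), np.array(labels)

# Example usage (paths are placeholders):
# xs, ys = load_nips17_images("dataset/images", "dataset/images.csv")
```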