Improving Transferability of Adversarial Examples with Virtual Step and Auxiliary Gradients

Authors: Ming Zhang, Xiaohui Kuang, Hu Li, Zhendong Wu, Yuanping Nie, Gang Zhao

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on ImageNet show that the adversarial examples crafted by our method can effectively transfer to different networks. For single-model attacks, our method outperforms the state-of-the-art baselines, improving the success rates by a large margin of 12%–28%.
Researcher Affiliation | Academia | Ming Zhang, Xiaohui Kuang, Hu Li, Zhendong Wu, Yuanping Nie, Gang Zhao. National Key Laboratory of Science and Technology on Information System Security, Beijing, China. zm_stiss@163.com, xhkuang@bupt.edu.cn, {lihu, wuzhendong, yuanpingnie}@nudt.edu.cn, zemell@foxmail.com
Pseudocode | Yes | Algorithm 1: VA-I-FGSM for crafting adversarial examples.
Input: A classifier f with loss function J; a benign example x and its true label y_true; the label set C; the number of iterations T; the perturbation threshold ϵ; the virtual step size α; the number of auxiliary labels n_aux.
Output: The adversarial example x^adv.
1: Let x^adv_0 ← x; t ← 0.
2: while t < T do
3:   x^adv_tmp ← x^adv_t + α · sign(∇_x J(x^adv_t, y_true))
4:   C_aux ← RandomSelect(C \ {y_true}, n_aux)
5:   for y_aux in C_aux do
6:     x^adv_tmp ← x^adv_tmp − α · sign(∇_x J(x^adv_t, y_aux))
7:   end for
8:   x^adv_(t+1) ← x^adv_tmp
9:   t ← t + 1
10: end while
11: return x^adv ← Clip_{x,ϵ}{x^adv_T}
(A runnable sketch of this algorithm is given after the table.)
Open Source Code | Yes | Our code is publicly available at https://github.com/mingcheung/Virtual-Step-and-Auxiliary-Gradients.
Open Datasets | Yes | We use a subset dataset [2] of ImageNet to conduct the experiments. This subset dataset consists of 1000 images and was used in the NIPS 2017 adversarial competition. [2] https://www.kaggle.com/c/nips-2017-non-targeted-adversarial-attack/data
Dataset Splits | No | The paper mentions using a subset of ImageNet consisting of 1000 images but does not explicitly state the training, validation, or test dataset splits or percentages used in their experiments.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions Keras implicitly through a link to its applications API, but does not provide specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions).
Experiment Setup | Yes | For all attacks, the maximum perturbation of each pixel is set to ϵ = 16. The total number of iterations is set to T = min(ϵ + 4, 1.25ϵ) [Kurakin et al., 2017]. For I-FGSM, DI2-FGSM and TI2-FGSM, the step size is set to α = ϵ/T. For DI2-FGSM, the transformation operation T(x; p) first randomly resizes the input to an rnd × rnd × 3 image, with rnd ∈ [299, 330), then pads it to size 330 × 330 × 3 in a random manner. The transformation probability p is set to 0.5. For TI2-FGSM, W is set to be a 15 × 15 Gaussian kernel. In experiments, the pixel values of all images are scaled to [0, 1]. Correspondingly, ϵ is scaled to 16/255. ... For VA-I-FGSM, we set α = 0.007 and n_aux = 3 according to the majority rule. Similarly, we have searched the optimal hyperparameters for VA-DI2-FGSM and VA-TI2-FGSM. For VA-DI2-FGSM, the hyperparameters are set to α = 0.009 and n_aux = 1; for VA-TI2-FGSM, the hyperparameters are set to α = 0.009 and n_aux = 4. (A sketch of the DI2-FGSM transformation is given after the table.)
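
For concreteness, below is a minimal PyTorch sketch of Algorithm 1 for a single image with batch size 1 and pixel values in [0, 1]. The paper's own code (linked above) appears to build on Keras/TensorFlow, so this rendering, the function name va_i_fgsm, and details such as the cross-entropy loss are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def va_i_fgsm(model, x, y_true, num_classes, T=20, epsilon=16/255,
              alpha=0.007, n_aux=3):
    """Sketch of Algorithm 1 (VA-I-FGSM) for a single image in [0, 1].

    Each iteration takes one "virtual" ascent step of size alpha along the
    sign of the true-label gradient, then one descent step per auxiliary
    label; the result is projected into the epsilon-ball only at the end
    (line 11 of Algorithm 1).
    """
    x_adv = x.clone().detach()
    for _ in range(T):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Line 3: virtual step along the true-label gradient sign.
        loss = F.cross_entropy(logits, y_true)
        grad = torch.autograd.grad(loss, x_adv, retain_graph=True)[0]
        x_tmp = x_adv.detach() + alpha * grad.sign()
        # Line 4: draw n_aux auxiliary labels from C \ {y_true}.
        candidates = [c for c in range(num_classes) if c != int(y_true)]
        picks = torch.randperm(len(candidates))[:n_aux].tolist()
        for i in picks:
            y_aux = torch.tensor([candidates[i]], device=x.device)
            # Line 6: the auxiliary gradient is evaluated at x_adv_t,
            # not at the partially updated x_tmp.
            aux_loss = F.cross_entropy(logits, y_aux)
            aux_grad = torch.autograd.grad(aux_loss, x_adv,
                                           retain_graph=True)[0]
            x_tmp = x_tmp - alpha * aux_grad.sign()
        x_adv = x_tmp.detach()
    # Line 11: clip into the epsilon-ball and the valid pixel range once.
    return torch.clamp(x_adv, x - epsilon, x + epsilon).clamp(0.0, 1.0)
```

With the single-model settings quoted in the Experiment Setup row (α = 0.007, n_aux = 3, ϵ = 16/255, T = 20), a call would look like va_i_fgsm(model, x, y_true, num_classes=1000).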
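
As a worked example of the iteration schedule, ϵ = 16 gives T = min(16 + 4, 1.25 × 16) = 20 and, for I-FGSM, α = ϵ/T = 0.8 in 0–255 units (about 0.0031 after scaling to [0, 1]). Below is a minimal PyTorch sketch of the DI2-FGSM transformation T(x; p) as described in the Experiment Setup row; the helper name di_transform and the nearest-neighbor resize mode are our assumptions rather than details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def di_transform(x, p=0.5, low=299, high=330):
    """Sketch of the DI2-FGSM input transformation described above: with
    probability p, resize a 1x3x299x299 input to rnd x rnd, with rnd
    drawn from [299, 330), then zero-pad to 330 x 330 at a random offset."""
    if torch.rand(1).item() >= p:
        return x  # with probability 1 - p, the input passes through unchanged
    rnd = int(torch.randint(low, high, (1,)))
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = high - rnd
    left = int(torch.randint(0, pad + 1, (1,)))
    top = int(torch.randint(0, pad + 1, (1,)))
    # F.pad pads the last two dims in (left, right, top, bottom) order.
    return F.pad(resized, (left, pad - left, top, pad - top))
```

In DI2-FGSM (and VA-DI2-FGSM), this transform would be applied to the adversarial image before each gradient computation, so the surrogate model sees a randomly resized and padded input with probability p = 0.5.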