Rethinking the Backward Propagation for Adversarial Transferability

Authors: Xiaosen Wang, Kangheng Tong, Kun He

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on the ImageNet dataset demonstrate that not only does our method substantially boost the adversarial transferability, but it is also general to existing transfer-based attacks. Code is available at https://github.com/Trustworthy-AI-Group/RPA.
Researcher Affiliation | Collaboration | Xiaosen Wang (1), Kangheng Tong (2), Kun He (2); (1) Huawei Singular Security Lab; (2) School of Computer Science and Technology, Huazhong University of Science and Technology. {xiaosen,tongkangheng,brooklet60}@hust.edu.cn
Pseudocode | No | The paper describes the proposed method but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/Trustworthy-AI-Group/RPA.
Open Datasets | Yes | Following LinBP [12], we randomly sample 5,000 images pertaining to the 1,000 categories from the ILSVRC 2012 validation set [35], which could be classified correctly by all the victim models. (A hedged sketch of this selection step follows the table.)
Dataset Splits | No | The paper uses a sample of 5,000 images from the ILSVRC 2012 validation set to generate and evaluate adversarial examples, but it does not specify conventional train/validation/test splits (e.g., percentages or counts) for the adversarial attack methodology itself.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions).
Experiment Setup | Yes | Hyper-parameters. We adopt the maximum magnitude of perturbation ϵ = 8/255 to align with existing works. We run the attacks for T = 10 iterations with step size α = 1.6/255 for untargeted attacks and T = 300 iterations with step size α = 1/255 for targeted attacks. We set the momentum decay factor µ = 1.0 and sample 20 examples for VMI-FGSM. The number of spectrum transformations and the tuning factor are set to N = 20 and ρ = 0.5, respectively. The decay factor for SGM is γ = 0.5 and the random range of the Ghost network is λ = 0.22. We follow the setting of LinBP to modify the backward propagation of ReLU in the last eight residual blocks of ResNet-50. We set the temperature coefficient t = 10 for ResNet-50 and t = 1 for VGG-19. (A hedged attack-loop sketch using these hyper-parameters follows the table.)
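The data-selection step quoted in the Open Datasets row is straightforward to reproduce. Below is a minimal PyTorch sketch, not the authors' released script: the `models` list, the data `loader`, and the `select_correct_images` helper are illustrative assumptions, and where the paper samples randomly across the 1,000 classes, this sketch simply keeps the first `num_needed` survivors.

```python
import torch

@torch.no_grad()
def select_correct_images(models, loader, num_needed=5000, device="cuda"):
    """Keep only images that every (eval-mode) victim model classifies correctly."""
    kept_images, kept_labels = [], []
    total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # An image survives only if every victim model predicts its true label.
        correct = torch.ones_like(labels, dtype=torch.bool)
        for model in models:
            correct &= model(images).argmax(dim=1).eq(labels)
        kept_images.append(images[correct].cpu())
        kept_labels.append(labels[correct].cpu())
        total += int(correct.sum())
        if total >= num_needed:
            break
    return torch.cat(kept_images)[:num_needed], torch.cat(kept_labels)[:num_needed]
```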
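For the Experiment Setup row, the quoted untargeted hyper-parameters (ϵ = 8/255, T = 10, α = 1.6/255, µ = 1.0) drop naturally into an MI-FGSM loop, and the temperature coefficient t suggests a smoothed ReLU derivative in the backward pass. The sigmoid-based relaxation below is one plausible reading of that backward modification, not a verified reimplementation of the paper's method; consult the released code for the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftBackwardReLU(torch.autograd.Function):
    """ReLU in the forward pass; a sigmoid(t * x) mask instead of the hard
    0/1 mask in the backward pass (an assumed reading of the temperature t)."""

    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x)
        ctx.t = t
        return F.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * torch.sigmoid(ctx.t * x), None

class ReLUWithSoftBackward(nn.Module):
    """Drop-in replacement for nn.ReLU with a relaxed backward pass."""

    def __init__(self, t=10.0):
        super().__init__()
        self.t = t

    def forward(self, x):
        return SoftBackwardReLU.apply(x, self.t)

def mi_fgsm(model, x, y, eps=8 / 255, T=10, alpha=1.6 / 255, mu=1.0):
    """Untargeted MI-FGSM with the paper's reported hyper-parameters.
    Assumes x is a batch of images in [0, 1] with shape (N, C, H, W)."""
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(T):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum accumulation with per-sample L1-normalized gradients.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Take a sign step, then project back into the eps-ball and [0, 1].
        delta = (x_adv.detach() + alpha * g.sign() - x).clamp(-eps, eps)
        x_adv = (x + delta).clamp(0.0, 1.0)
    return x_adv
```

To mirror the LinBP-style setting quoted above, one would swap the `nn.ReLU` modules in the last eight residual blocks of a ResNet-50 for `ReLUWithSoftBackward(t=10)` before calling `mi_fgsm`.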