Boosting Adversarial Transferability by Achieving Flat Local Maxima

Authors: Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on the ImageNet-compatible dataset show that the proposed method can generate adversarial examples in flat local regions and significantly improve adversarial transferability against both normally trained and adversarially trained models compared with state-of-the-art attacks. In this section, we conduct extensive experiments on the ImageNet-compatible dataset.
Researcher Affiliation | Collaboration | Zhijin Ge (1), Hongying Liu (2), Xiaosen Wang (3), Fanhua Shang (4), Yuanyuan Liu (1). (1) School of Artificial Intelligence, Xidian University; (2) Medical College, Tianjin University, China; (3) Huawei Singular Security Lab; (4) College of Intelligence and Computing, Tianjin University
Pseudocode | Yes | Algorithm 1: Penalizing Gradient Norm (PGN) attack method (a PyTorch-style sketch is given after the table)
Open Source Code | Yes | Our codes are available at: https://github.com/Trustworthy-AI-Group/PGN.
Open Datasets | Yes | We conduct our experiments on the ImageNet-compatible dataset, which is widely used in previous works [4, 35, 42]. It contains 1,000 images with the size of 299 × 299 × 3, ground-truth labels, and target labels for targeted attacks. (A hedged loading sketch is given after the table.)
Dataset Splits | No | The paper mentions using the 'ImageNet-compatible dataset' and 'CIFAR-10', but does not specify any training, validation, or test dataset splits (e.g., percentages or sample counts for each split).
Hardware Specification | Yes | These experiments were conducted using code executed on an RTX 2080 Ti with a CUDA environment.
Software Dependencies | No | The paper mentions a 'CUDA environment' but does not specify its version number or any other software dependencies (e.g., libraries, frameworks) with specific version information.
Experiment Setup | Yes | We set the maximum perturbation ϵ = 16.0/255, the number of iterations T = 10, and the step size α = ϵ/T. For MI-FGSM and NI-FGSM, we set the decay factor µ = 1.0. For VMI-FGSM, we set the number of sampled examples N = 20 and the upper bound of the neighborhood size β = 1.5·ϵ. For EMI-FGSM, we set the number of examples N = 11 and the sampling interval bound η = 7, and adopt linear sampling. For RAP, we set the step size α = 2.0/255, the number of iterations K = 400, the inner iteration number T = 10, the late-start K_LS = 100, and the neighborhood size ϵ_n = 16.0/255. For our proposed PGN, we set the number of examples N = 20, the balanced coefficient δ = 0.5, and the upper bound ζ = 3.0·ϵ. (These values are collected in the configuration snippet after the table.)
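
For concreteness, below is a minimal PyTorch-style sketch of the PGN attack as Algorithm 1 is summarized above: sample N neighbors within a ζ-ball, take a look-ahead step to penalize the gradient norm, interpolate the two gradients with the balanced coefficient δ, and apply an MI-FGSM-style momentum update. This is our reading under assumptions, not the authors' released implementation; the function name pgn_attack, the cross-entropy loss, and the L1 gradient normalization are illustrative choices, so consult the repository linked above for the reference code.

```python
import torch

def pgn_attack(model, x, y, eps=16.0 / 255, T=10, N=20,
               delta=0.5, zeta=3.0 * 16.0 / 255, mu=1.0):
    """Sketch of the Penalizing Gradient Norm (PGN) attack (untargeted)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / T                      # step size alpha = eps / T, as above
    x_adv = x.clone().detach()
    g_mom = torch.zeros_like(x)          # momentum accumulator (decay mu)

    def grad_at(point):
        point = point.detach().requires_grad_(True)
        return torch.autograd.grad(loss_fn(model(point), y), point)[0]

    def l1_normalize(g):
        return g / (g.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)

    for _ in range(T):
        g_avg = torch.zeros_like(x)
        for _ in range(N):
            # Sample a random neighbor uniformly within the zeta-ball.
            x_near = x_adv + torch.empty_like(x).uniform_(-zeta, zeta)
            g1 = grad_at(x_near)
            # Look-ahead point used to penalize the gradient norm.
            x_star = x_near - alpha * l1_normalize(g1)
            g2 = grad_at(x_star)
            # Balanced coefficient delta interpolates the two gradients.
            g_avg = g_avg + ((1.0 - delta) * g1 + delta * g2) / N
        g_mom = mu * g_mom + l1_normalize(g_avg)   # MI-FGSM-style momentum
        x_adv = x_adv + alpha * g_mom.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps),
                            0.0, 1.0).detach()
    return x_adv
```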
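The dataset row notes that each image carries a ground-truth label and a target label; a hedged loading sketch follows. It assumes the layout commonly used for releases of this dataset (an images/ folder of 1,000 PNGs plus an images.csv with ImageId, TrueLabel, and TargetClass columns); the paths and column names are assumptions, not details taken from the paper.

```python
import csv
import os
from PIL import Image
import torch
from torchvision import transforms

def load_imagenet_compatible(root):
    to_tensor = transforms.Compose([
        transforms.Resize((299, 299)),   # images are 299 x 299 x 3
        transforms.ToTensor(),
    ])
    images, true_labels, target_labels = [], [], []
    with open(os.path.join(root, "images.csv")) as f:
        for row in csv.DictReader(f):
            path = os.path.join(root, "images", row["ImageId"] + ".png")
            images.append(to_tensor(Image.open(path).convert("RGB")))
            true_labels.append(int(row["TrueLabel"]))      # ground-truth label
            target_labels.append(int(row["TargetClass"]))  # for targeted attacks
    return (torch.stack(images),
            torch.tensor(true_labels),
            torch.tensor(target_labels))
```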
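Finally, the experiment-setup row packs many hyperparameters into prose; the snippet below collects them in one place for anyone reproducing the evaluation. The dictionary layout and key names are ours; the values are the ones stated in the row above.

```python
# Evaluation hyperparameters from the Experiment Setup row.
EPS = 16.0 / 255
ATTACK_CONFIG = {
    "common": {"eps": EPS, "T": 10, "alpha": EPS / 10},
    "mi_ni":  {"mu": 1.0},                            # MI-/NI-FGSM decay factor
    "vmi":    {"N": 20, "beta": 1.5 * EPS},           # samples, neighborhood bound
    "emi":    {"N": 11, "eta": 7, "sampling": "linear"},
    "rap":    {"alpha": 2.0 / 255, "K": 400, "T_inner": 10,
               "K_LS": 100, "eps_n": 16.0 / 255},     # late-start, neighborhood size
    "pgn":    {"N": 20, "delta": 0.5, "zeta": 3.0 * EPS},
}
```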