Enhancing the Transferability of Adversarial Examples with Random Patch
Authors: Yaoyuan Zhang, Yu-an Tan, Tian Chen, Xinrui Liu, Quanxin Zhang, Yuanzhang Li
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate the effectiveness of the proposed RPA. Compared to the state-of-the-art transferable attacks, our attacks improve the black-box attack success rate by 2.9% against normally trained models, 4.7% against defense models, and 4.6% against vision transformers on average, reaching a maximum of 99.1%, 93.2%, and 87.8%, respectively. |
| Researcher Affiliation | Academia | Yaoyuan Zhang1, Yu-an Tan2, Tian Chen2, Xinrui Liu1, Quanxin Zhang1 and Yuanzhang Li1. 1School of Computer Science and Technology, Beijing Institute of Technology. 2School of Cyberspace Science and Technology, Beijing Institute of Technology. {yaoyuan, tan2008, chentian20, 3220200923, zhangqx, popular}@bit.edu.cn |
| Pseudocode | Yes | Algorithm 1 Random Patch Attack |
| Open Source Code | Yes | Code is available at: https://github.com/alwaysfoggy/RPA. |
| Open Datasets | Yes | We follow the previous works [Gao et al., 2020; Wang et al., 2021b] to conduct our experiments on the ImageNet-compatible dataset 1, containing 1000 images used for the NIPS 2017 adversarial competition. 1https://github.com/cleverhans-lab/cleverhans/tree/master/cleverhans_v3.1.0/examples/nips17_adversarial_competition/dataset |
| Dataset Splits | No | The paper uses the ImageNet-compatible dataset containing 1000 images, but does not explicitly state the training, validation, and test splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, or other libraries with their versions). |
| Experiment Setup | Yes | For the settings of parameters, we set the maximum perturbation to ϵ = 16, the number of iterations to T = 10, and the step size to α = ϵ/T = 1.6 in all experiments. For the proposed RPA, the patch size alternates over n = 1, 3, 5, 7, the ensemble number N is set to 60, and the modify probability pm is set to 0.3, 0.2, and 0.2 when attacking normally trained models, defense models, and vision transformers, respectively. We set the random seed to 1234 in our experiments to guarantee that the proposed method is reproducible. |
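The reported setup names a patch-wise random masking scheme (patch sizes n = 1, 3, 5, 7, modify probability pm, fixed seed 1234). As a rough illustration of what such a mask could look like, here is a minimal NumPy sketch; the function name `random_patch_mask` and the choice of uniform scaling factors in [0, 2) for modified patches are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def random_patch_mask(height, width, patch_size, modify_prob, rng=None):
    """Sketch of a patch-wise random mask (hypothetical RPA-style masking).

    Each non-overlapping patch_size x patch_size patch is, with probability
    modify_prob, filled with a random scaling factor drawn uniformly from
    [0, 2); otherwise the patch is left at 1 (unmodified).
    """
    rng = np.random.default_rng(1234) if rng is None else rng
    # Number of patches needed to cover the image in each dimension.
    ph = int(np.ceil(height / patch_size))
    pw = int(np.ceil(width / patch_size))
    # Per-patch decision: modify with probability modify_prob.
    modify = rng.random((ph, pw)) < modify_prob
    scales = np.where(modify, rng.uniform(0.0, 2.0, (ph, pw)), 1.0)
    # Expand the patch grid to pixel resolution and crop to (height, width).
    mask = np.kron(scales, np.ones((patch_size, patch_size)))
    return mask[:height, :width]

# Paper settings: patch sizes alternate over n = 1, 3, 5, 7; pm = 0.3 when
# attacking normally trained models; random seed 1234 for reproducibility.
rng = np.random.default_rng(1234)
masks = [random_patch_mask(224, 224, n, 0.3, rng) for n in (1, 3, 5, 7)]
print([m.shape for m in masks])  # four (224, 224) masks
```

A mask like this would be multiplied elementwise with an input (or intermediate feature map) inside the attack's ensemble loop; the fixed seed matches the reproducibility note in the setup row.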