Rethinking Adversarial Transferability from a Data Distribution Perspective
Authors: Yao Zhu, Jiacheng Sun, Zhenguo Li
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive transferable attacks against multiple DNNs and show that our IAA can boost the transferability of the crafted attacks in all cases and go beyond state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Yao Zhu (1), Jiacheng Sun (2), Zhenguo Li (2); (1) Zhejiang University, (2) Huawei Noah's Ark Lab |
| Pseudocode | Yes | Algorithm 1 Intrinsic Adversarial Attack (IAA) |
| Open Source Code | No | We plan to open the source code to reproduce the main experimental results later. |
| Open Datasets | Yes | randomly selected 5000 ImageNet validation images; ImageNet evaluation datasets; all the source models are trained on the ImageNet training set. |
| Dataset Splits | Yes | randomly selected 5000 ImageNet validation images; all the source models are trained on the ImageNet training set; ImageNet evaluation datasets. |
| Hardware Specification | Yes | All experiments in this paper are run on Tesla V100. |
| Software Dependencies | No | We use the pre-trained models in PyTorch (Paszke et al., 2019). The scikit-optimize library is described as a simple and efficient library to minimize black-box functions. No specific version numbers for these software components are provided in the text. (A hedged search sketch follows the table.) |
| Experiment Setup | Yes | We constrain the adversarial perturbation within the ℓ∞ ball of radius ϵ = 16/255 with respect to pixel values in [0, 1] and set the step size α to 2/255. The iteration steps in all experiments are set to 10. For ResNet-50, the search results are β = 20, and λ1 applied to the residual modules in Block1 is 0.98, λ2 applied to the residual modules in Block2 is 0.87, λ3 applied to the residual modules in Block3 is 0.73, λ4 applied to the residual modules in Block4 is 0.19. (A hedged sketch of this attack loop follows the table.) |
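
The experiment-setup row describes a constrained iterative attack: perturbations stay inside an ℓ∞ ball of radius ϵ = 16/255, with step size α = 2/255 and 10 iterations. The following is a minimal PyTorch sketch of such a loop, not the authors' released implementation; `source_model` is assumed to be a pretrained (and, for IAA, architecture-modified) ImageNet classifier, and plain cross-entropy is used here purely for illustration.

```python
import torch
import torch.nn.functional as F

def linf_iterative_attack(source_model, images, labels,
                          eps=16 / 255, alpha=2 / 255, steps=10):
    """Sketch of an l_inf-bounded iterative attack with the reported hyperparameters."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Gradient-sign step, then project back into the l_inf ball and the [0, 1] pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```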
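
The software-dependencies row notes that scikit-optimize is used to minimize black-box functions, which is how the paper searches for β and the per-block λ values. The sketch below shows one plausible way to set up such a search with `skopt.gp_minimize`; the search ranges and the helper `transfer_success_rate` (which would craft attacks with the given hyperparameters and score them on a held-out target model) are assumptions, not taken from the paper.

```python
from skopt import gp_minimize
from skopt.space import Real

# Assumed search space: beta plus one decay factor per residual block.
search_space = [
    Real(1.0, 30.0, name="beta"),
    Real(0.0, 1.0, name="lambda1"),
    Real(0.0, 1.0, name="lambda2"),
    Real(0.0, 1.0, name="lambda3"),
    Real(0.0, 1.0, name="lambda4"),
]

def objective(params):
    beta, lam1, lam2, lam3, lam4 = params
    # Negate because gp_minimize minimizes, while we want to maximize transferability.
    # transfer_success_rate is a hypothetical evaluation helper.
    return -transfer_success_rate(beta, (lam1, lam2, lam3, lam4))

result = gp_minimize(objective, search_space, n_calls=50, random_state=0)
print("best hyperparameters:", result.x)
```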