Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
Authors: Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. |
| Researcher Affiliation | Academia | Tsinghua University; Shanghai Jiao Tong University; PCL Research Center of Networks and Communications, Peng Cheng Laboratory; The University of Melbourne |
| Pseudocode | No | The paper describes the Skip Gradient Method (SGM) using mathematical equations (Equations 9 and 10) but does not provide a separate pseudocode block or algorithm listing. A hedged sketch of the SGM gradient decay appears after this table. |
| Open Source Code | No | The paper states: "For proper implementation, we use open-source codes and pretrained models for our experiments, e.g., AdverTorch (Ding et al., 2019) for FGSM, PGD and MI, and source/target models from two GitHub repositories for all models. We reproduced DI and TI in PyTorch." This indicates the authors *used* existing open-source code and reproduced some methods, but they do not state that they release their own implementation of the Skip Gradient Method (SGM) described in the paper. An illustrative AdverTorch call is sketched below. |
| Open Datasets | Yes | We first conduct a toy experiment with the BIM attack and ResNet-18 on the ImageNet validation dataset (Deng et al., 2009)... All models were trained on the ImageNet training set. |
| Dataset Splits | Yes | We randomly select 5000 ImageNet validation images that are correctly classified by all source models, and craft untargeted attacks under maximum L∞ perturbation ϵ = 16, which is a typical black-box setting (Dong et al., 2018; Xie et al., 2019; Dong et al., 2019). A sketch of this selection step follows the table. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments, such as specific GPU or CPU models, or cloud computing instances. |
| Software Dependencies | No | The paper mentions using "AdverTorch (Ding et al., 2019)" and that some methods were reproduced in "PyTorch". However, it does not provide specific version numbers for these software components (e.g., PyTorch 1.x or AdverTorch 0.y). |
| Experiment Setup | Yes | The iteration step is set to 10 and 20 for unsecured and secured target models respectively. For all iterative methods PGD, TI and our SGM, the step size is set to α = 2. For our proposed SGM, the decay parameter is set to γ = 0.2 (0.5) and γ = 0.5 (0.7) on ResNet and DenseNet source models in PGD (FGSM) respectively. For all attack methods, we follow the standard setting (Dong et al., 2018; Xie et al., 2019) to craft untargeted attacks under maximum L∞ perturbation ϵ = 16 with respect to pixel values in [0, 255]. A minimal attack loop under these settings is sketched after the table. |
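Since the paper describes SGM only through Equations 9 and 10 (decaying the gradient that flows through each residual module by a factor γ while leaving the skip connection untouched), the following is a minimal PyTorch sketch of that idea, not the authors' implementation. The `GradDecay` function and the toy `SGMBasicBlock` module are our own illustrative names.

```python
import torch
import torch.nn as nn

class GradDecay(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by
    gamma in the backward pass. Applying it to the residual branch gives
    the per-block factor (1 + gamma * df/dz) from Eq. 9 of the paper."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.clone()

    @staticmethod
    def backward(ctx, grad_out):
        # Gradient w.r.t. x is decayed; gamma itself gets no gradient.
        return ctx.gamma * grad_out, None

class SGMBasicBlock(nn.Module):
    """Toy residual block: z_{i+1} = relu(z_i + f(z_i)), with the gradient
    through f decayed by gamma during backprop (skip path left intact)."""
    def __init__(self, channels, gamma=0.2):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.gamma = gamma

    def forward(self, x):
        return torch.relu(x + GradDecay.apply(self.f(x), self.gamma))
```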
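The paper reports using AdverTorch for FGSM, PGD and MI. A plausible instantiation of its `LinfPGDAttack` under the reported setting (ϵ = 16 and α = 2 on [0, 255] pixels, 10 iterations, untargeted) might look like the sketch below; the ResNet-18 source model follows the paper's toy experiment, but the exact call is our assumption, not the authors' script.

```python
import torch.nn as nn
import torchvision.models as models
from advertorch.attacks import LinfPGDAttack

# Source model: a pretrained ResNet-18, as in the paper's toy experiment.
model = models.resnet18(pretrained=True).eval()

# eps = 16/255 and eps_iter = 2/255 correspond to eps = 16 and alpha = 2
# on [0, 255] pixel values; nb_iter = 10 matches unsecured target models.
attack = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=16 / 255,
    nb_iter=10,
    eps_iter=2 / 255,
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)

# x: batch of images in [0, 1], y: labels -> x_adv = attack.perturb(x, y)
```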
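The split description ("5000 ImageNet validation images that are correctly classified by all source models") can be mirrored with a filter like the one below; `models` and `loader` are hypothetical placeholders for the evaluated source models and a validation `DataLoader`.

```python
import torch

@torch.no_grad()
def select_correct(models, loader, n=5000):
    """Collect validation images classified correctly by *all* source
    models, up to n samples (mirrors the paper's selection of 5000).
    Assumes every model in `models` is already in eval mode."""
    xs, ys = [], []
    for x, y in loader:
        keep = torch.ones(x.size(0), dtype=torch.bool)
        for m in models:
            keep &= m(x).argmax(1) == y
        xs.append(x[keep])
        ys.append(y[keep])
        if sum(t.size(0) for t in xs) >= n:
            break
    return torch.cat(xs)[:n], torch.cat(ys)[:n]
```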
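Putting the setup numbers together, a minimal untargeted L∞ attack loop under the reported hyperparameters (ϵ = 16/255, α = 2/255, 10 steps) could be written as follows. If the source model's residual branches decay gradients by γ (as in the SGM sketch above), this same loop produces SGM attacks; it is an illustrative sketch, not the paper's code.

```python
import torch
import torch.nn.functional as F

def craft_untargeted(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Iterative untargeted L_inf attack (BIM/PGD-style, no random init)
    under the paper's setting: eps = 16 and alpha = 2 on [0, 255] pixels,
    10 steps for unsecured target models (20 for secured ones)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```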