ILA-DA: Improving Transferability of Intermediate Level Attack with Data Augmentation
Authors: Chiu Wai Yan, Tsz-Him Cheung, Dit-Yan Yeung
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As shown by extensive experiments, ILA-DA greatly outperforms ILA and other state-of-the-art attacks by a large margin. On ImageNet, we attain an average attack success rate of 84.5%, which is 19.5% better than ILA and 4.7% better than the previous state-of-the-art across nine undefended models. (Section 4, Experimental Results) |
| Researcher Affiliation | Academia | Chiu Wai YAN, Tsz-Him CHEUNG & Dit-Yan YEUNG Department of Computer Science and Engineering The Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong {cwyan, thcheungae}@connect.ust.hk, dyyeung@cse.ust.hk |
| Pseudocode | Yes | The algorithmic details for applying these augmentation techniques in ILA-DA are depicted in Algorithm 1. Algorithm 1 ILA-DA |
| Open Source Code | Yes | The code is available at https://github.com/argenycw/ILA-DA. |
| Open Datasets | Yes | All the models are pretrained on ImageNet (Russakovsky et al., 2015), with the model parameters of PNASNet and SENet obtained from public repositories and the remaining from Torchvision (Paszke et al., 2019). For CIFAR-10, we follow Li et al. (2020b)... For both datasets, we randomly sample 5000 images from the test set that are classified correctly by all four models and we pick VGG19 to be the source model. |
| Dataset Splits | Yes | To measure the attack success rate, we randomly sample 5000 images from the ILSVRC2012 validation set with all images being classified correctly by the nine models. |
| Hardware Specification | Yes | All the experiments are performed on Nvidia A100 GPU. |
| Software Dependencies | No | The paper mentions software like Torchvision (Paszke et al., 2019) and TIMM (Wightman, 2019) but does not provide specific version numbers for these or other key software components, such as Python or PyTorch. |
| Experiment Setup | Yes | The default number of iterations of I-FGSM is 10 and the attack step size is set to max(1/255, ϵ / no. of iterations). For the choice of the intermediate layer, we opt for layer 3-1 for ResNet50, layer 9 for VGG19, and layer 6a for Inception V3, where the former two have been shown to result in good performance by Li et al. (2020b). To mount a complete attack, we first run I-FGSM for 10 iterations on the source model, and then pass the example as the reference attack to ILA-DA to perform fine-tuning for 50 iterations. The same model is used as both the source model of I-FGSM and the surrogate model of ILA-DA. The results of the attacks on undefended models are shown in Table 1, with the details of the hyper-parameters listed in Appendix F. Table 8: Hyper-parameters used in the baselines. |
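The quoted setup specifies two mechanical details worth making concrete: the step-size rule max(1/255, ϵ / no. of iterations) and the iterative sign-gradient update of I-FGSM confined to an ϵ-ball. The sketch below illustrates both in pure Python on a one-dimensional toy "loss"; the scalar model, function names, and parameter values are illustrative assumptions, not the paper's implementation (which attacks ImageNet classifiers with PyTorch).

```python
# Hedged sketch of the attack schedule quoted above. The step size is
# max(1/255, eps / num_iterations); each I-FGSM step moves by the sign
# of the gradient and is projected back into the eps-ball around the
# original input. The 1-D setting is illustrative only.

def ifgsm_step_size(eps: float, num_iterations: int) -> float:
    """Step-size rule from the paper's experiment setup."""
    return max(1.0 / 255.0, eps / num_iterations)

def ifgsm_scalar(x0: float, grad_fn, eps: float, num_iterations: int) -> float:
    """One-dimensional I-FGSM: ascend the loss by the sign of its
    gradient, clipping back into [x0 - eps, x0 + eps] after each step."""
    alpha = ifgsm_step_size(eps, num_iterations)
    x = x0
    for _ in range(num_iterations):
        g = grad_fn(x)
        x = x + alpha * (1.0 if g >= 0 else -1.0)
        x = min(max(x, x0 - eps), x0 + eps)  # project onto the eps-ball
    return x

# With a hypothetical eps = 16/255 and the paper's 10 iterations,
# eps / 10 ≈ 0.0063 exceeds 1/255 ≈ 0.0039, so the max() keeps eps / 10.
alpha = ifgsm_step_size(16 / 255, 10)
```

Note how the max() clamp only matters for small budgets: when ϵ / 10 falls below 1/255, the step size is floored at one pixel level so each iteration still makes a quantization-visible change.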