A Unified Approach to Interpreting and Boosting Adversarial Transferability

Authors: Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparative studies are conducted to verify this negative correlation through different DNNs. Using 50 images randomly sampled from the validation set of the ImageNet dataset (Russakovsky et al., 2015), we generate adversarial perturbations on four types of DNNs, including ResNet-34/152 (RN-34/152) (He et al., 2016) and DenseNet-121/201 (DN-121/201) (Huang et al., 2017).
Researcher Affiliation | Academia | Shanghai Jiao Tong University; Key Lab. of Machine Perception (MoE), School of EECS, Peking University, Beijing, China
Pseudocode | No | The paper describes algorithmic steps using mathematical formulas (e.g., Equation 9 in Appendix C) but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Our code is available online: https://github.com/xherdan76/A-Unified-Approach-to-Interpreting-and-Boosting-Adversarial-Transferability
Open Datasets | Yes | Using 50 images randomly sampled from the validation set of the ImageNet dataset (Russakovsky et al., 2015). (See the sampling sketch after this table.)
Dataset Splits | Yes | Using 50 images randomly sampled from the validation set of the ImageNet dataset (Russakovsky et al., 2015).
Hardware Specification | Yes | The time cost was measured using PyTorch 1.6 (Paszke et al., 2019) on Ubuntu 18.04, with the Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz and a Titan RTX GPU. (See the environment check after this table.)
Software Dependencies | Yes | The time cost was measured using PyTorch 1.6 (Paszke et al., 2019) on Ubuntu 18.04, with the Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz and a Titan RTX GPU.
Experiment Setup | Yes | All attacks were conducted with 100 steps on randomly selected 1000 images of the validation set in the ImageNet dataset. We set ϵ = 16/255 for the L∞ attack, and set ϵ = (16/255)·√n, following the setting in (Dong et al., 2018), for the L2 attack. The step size was set to 2/255 for all attacks. Considering the efficiency of signal processing in DNNs with different depths, we set λ = 1 for the IR Attack when the source DNN was ResNet, and λ = 2 for other source DNNs. (See the attack sketch after this table.)
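
The "Open Datasets" and "Dataset Splits" rows quote the paper's use of 50 images randomly sampled from the ImageNet validation set. A minimal sketch of that sampling step is given below; the directory path, the ImageFolder layout, and the fixed seed are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: randomly pick 50 images from a local copy of the ImageNet
# validation set. Path, preprocessing, and seed are illustrative assumptions.
import random

import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # keep pixels in [0, 1]; normalize inside the model if needed
])

val_dir = "/path/to/imagenet/val"  # hypothetical local path
val_set = datasets.ImageFolder(val_dir, transform=preprocess)

random.seed(0)  # assumed seed; the paper does not specify one
indices = random.sample(range(len(val_set)), 50)
subset = torch.utils.data.Subset(val_set, indices)
loader = torch.utils.data.DataLoader(subset, batch_size=1, shuffle=False)
```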
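
The "Hardware Specification" and "Software Dependencies" rows tie the reported time cost to PyTorch 1.6, Ubuntu 18.04, an i7-9800X CPU, and a Titan RTX GPU. A quick check like the one below, using only standard PyTorch and platform calls, can confirm whether a re-run happens on a comparable stack.

```python
# Print the software/hardware details the paper's timing depends on.
import platform

import torch

print("PyTorch:", torch.__version__)              # paper used 1.6
print("OS:", platform.platform())                 # paper used Ubuntu 18.04
print("CPU:", platform.processor())               # paper used an i7-9800X
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # paper used a Titan RTX
    print("CUDA:", torch.version.cuda)
else:
    print("No CUDA device visible; the reported timings assumed a GPU.")
```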
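
The "Experiment Setup" row fixes the L∞ attack at 100 steps with ϵ = 16/255 and a step size of 2/255. The sketch below is a plain iterative L∞ attack under exactly those settings; it deliberately omits the λ-weighted interaction-reduction term of the paper's IR Attack, so it should be read as the baseline attack skeleton rather than the authors' full method.

```python
# Baseline iterative L-infinity attack with the quoted hyperparameters
# (100 steps, epsilon = 16/255, step size = 2/255). The IR Attack's
# lambda-weighted interaction-reduction loss is NOT implemented here.
import torch
import torch.nn.functional as F
from torchvision import models


def linf_iterative_attack(model, x, y, eps=16 / 255, alpha=2 / 255, steps=100):
    """Untargeted I-FGSM-style attack; x is assumed to lie in [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then projection onto the L-inf ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv


# Example source DNN (ResNet-34, one of the paper's source models); in practice
# ImageNet normalization should be folded into the model's forward pass.
source_model = models.resnet34(pretrained=True).eval()
# for images, labels in loader:
#     adv = linf_iterative_attack(source_model, images, labels)
```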