Rethinking Model Ensemble in Transfer-based Adversarial Attacks
Authors: Huanran Chen, Yichi Zhang, Yinpeng Dong, Xiao Yang, Hang Su, Jun Zhu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to confirm the superior transferability of adversarial examples generated by our methods. We first verify in image classification for 31 victim models with various architectures (e.g., CNNs (He et al., 2016), Transformers (Dosovitskiy et al., 2020; Liu et al., 2021)) and training settings (e.g., standard training, adversarial training (Salman et al., 2020; Wong et al., 2020), input purification (Naseer et al., 2020; Nie et al., 2022)). |
| Researcher Affiliation | Collaboration | Huanran Chen¹,², Yichi Zhang²,³, Yinpeng Dong²,³, Xiao Yang², Hang Su², Jun Zhu²,³. ¹School of Computer Science, Beijing Institute of Technology; ²Dept. of Comp. Sci. and Tech., Institute for AI, Tsinghua-Bosch Joint ML Center, THBI Lab, BNRist Center, Tsinghua University, Beijing, 100084, China; ³RealAI |
| Pseudocode | Yes | Algorithm 1 MI-CWA algorithm. Require: natural image x_nat, label y, perturbation budget ε, iterations T, loss function L, model ensemble F = {f_i}, i = 1, ..., n, decay factor µ, step sizes r, β and α. 1: Initialize: m = 0, inner momentum m̂ = 0, x_0 = x_nat; |
| Open Source Code | Yes | Code is available at https://github.com/huanranchen/AdversarialAttacks. |
| Open Datasets | Yes | Similar to previous works, we adopt the NIPS2017 dataset, which is comprised of 1000 images compatible with ImageNet (Russakovsky et al., 2015). |
| Dataset Splits | No | The paper reports using the NIPS2017 dataset (1000 ImageNet-compatible images) but does not specify explicit training/validation/test splits, percentages, or sample counts needed for reproduction. |
| Hardware Specification | No | The paper includes a runtime (real-time cost) analysis but does not provide hardware details such as GPU/CPU models, memory, or cloud instance types used to run the experiments. |
| Software Dependencies | No | The paper mentions software components such as TorchVision, RobustBench, and the TensorFlow Model Garden, but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | Hyper-parameters: We set the perturbation budget ϵ = 16/255, total iteration T = 10, decay factor µ = 1, step sizes β = 50, r = 16/255/15, and α = 16/255/5. For compared methods, we employ their optimal hyper-parameters as reported in their respective papers. |
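
For readers who want a concrete starting point, the pseudocode and hyper-parameter rows above map onto a fairly compact attack loop. The sketch below is a minimal PyTorch reconstruction of that loop, not the authors' implementation: it assumes a SAM-style reverse step of size r, an inner-momentum forward step of size β on an L1-normalised gradient, and an MI-FGSM-style outer update of size α with decay µ. The function name `mi_cwa_sketch`, the per-batch L1 normalisation, and the projection points are illustrative choices; the exact ordering and clipping should be checked against the official repository (https://github.com/huanranchen/AdversarialAttacks).

```python
import torch
import torch.nn.functional as F

def mi_cwa_sketch(models, x_nat, y, eps=16/255, T=10, mu=1.0,
                  r=16/255/15, beta=50.0, alpha=16/255/5):
    """Hedged sketch of a momentum-iterative ensemble attack in the spirit
    of MI-CWA (Algorithm 1). Step ordering and normalisation are assumptions;
    consult the official code for the exact procedure."""
    x = x_nat.clone().detach()
    m = torch.zeros_like(x)        # outer momentum (MI-FGSM style)
    m_hat = torch.zeros_like(x)    # inner momentum shared across models

    for _ in range(T):
        x_inner = x.clone()
        for f in models:           # visit each ensemble member sequentially
            # reverse step: move against the current inner momentum (SAM-like)
            x_rev = (x_inner - r * m_hat.sign()).detach().requires_grad_(True)
            loss = F.cross_entropy(f(x_rev), y)
            g = torch.autograd.grad(loss, x_rev)[0]
            # accumulate inner momentum with an L1-normalised gradient
            m_hat = mu * m_hat + g / (g.abs().sum() + 1e-12)
            # forward step along the inner momentum, projected to the eps-ball
            x_inner = (x_inner + beta * m_hat).clamp(x_nat - eps, x_nat + eps)
        # outer momentum update from the aggregated inner displacement
        delta = x_inner - x
        m = mu * m + delta / (delta.abs().sum() + 1e-12)
        x = (x + alpha * m.sign()).clamp(x_nat - eps, x_nat + eps).clamp(0, 1)

    return x.detach()
```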
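
A hypothetical usage with the hyper-parameters reported in the Experiment Setup row (ε = 16/255, T = 10, µ = 1, β = 50, r = 16/255/15, α = 16/255/5) might look like the following; the surrogate ensemble and the image batch are placeholders, not the paper's exact models or data.

```python
import torch
from torchvision import models as tvm

# Placeholder surrogate ensemble; the paper's surrogate set differs.
surrogates = [tvm.resnet18(weights=None).eval(),
              tvm.resnet50(weights=None).eval()]

x_nat = torch.rand(4, 3, 224, 224)          # placeholder batch in [0, 1]
y = torch.randint(0, 1000, (4,))            # placeholder labels

x_adv = mi_cwa_sketch(surrogates, x_nat, y,
                      eps=16/255, T=10, mu=1.0,
                      r=16/255/15, beta=50.0, alpha=16/255/5)
print((x_adv - x_nat).abs().max())          # should not exceed eps
```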