Attacking Transformers with Feature Diversity Adversarial Perturbation
Authors: Chenxing Gao, Hang Zhou, Junqing Yu, YuTeng Ye, Jiale Cai, Junle Wang, Wei Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments to test our method on ViT-based models, CNN models, and MLP models. Furthermore, we assess the cross-task transferability of our attack method. |
| Researcher Affiliation | Collaboration | Chenxing Gao1, Hang Zhou1, Junqing Yu1, YuTeng Ye1, Jiale Cai1, Wei Yang1* 1Huazhong University of Science and Technology, Wuhan, China; Junle Wang2 2Tencent |
| Pseudocode | Yes | Algorithm 1: Feature Diversity Adversarial Perturbation on ViTs |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | Dataset: Similar to the settings in Dong (Dong et al. 2018), we randomly select 1000 images from the validation set of ImageNet 2012 (Russakovsky et al. 2015). |
| Dataset Splits | No | The paper mentions using 1000 images from the validation set of ImageNet 2012, but it does not provide specific train/validation/test dataset splits for its own experimental setup. |
| Hardware Specification | No | The computation is completed in the HPC Platform of Huazhong University of Science and Technology. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for its dependencies. |
| Experiment Setup | Yes | Attack settings: we conduct attacks using a maximum perturbation value of ϵ = 16, the total number of attack iterations is N = 30, and the step size α = 3/255. |
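Since the paper releases no code, the reported attack settings (maximum perturbation ϵ = 16 on the 0–255 pixel scale, N = 30 iterations, step size α = 3/255) can only be illustrated with a generic hedged sketch. The loop below is a standard signed-gradient iterative attack projected onto the L∞ ball; the gradient function is a placeholder stand-in, not the paper's feature-diversity objective, and all names here are illustrative assumptions.

```python
import numpy as np

# Reported settings, rescaled to the [0, 1] pixel range.
EPSILON = 16 / 255   # maximum perturbation (paper reports eps = 16 on 0-255)
ALPHA = 3 / 255      # step size
N_ITER = 30          # total attack iterations

def toy_loss_grad(x):
    """Placeholder gradient (pushes pixels upward); the paper's actual
    loss is a feature-diversity objective on ViT features, not shown here."""
    return np.ones_like(x)

def iterative_attack(x, grad_fn, epsilon=EPSILON, alpha=ALPHA, n_iter=N_ITER):
    """Signed-gradient ascent, projected onto the L-inf epsilon ball
    around the clean image x and clipped to the valid pixel range."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # L-inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # valid pixel range
    return x_adv

x = np.full((4, 4), 0.5)           # toy "image"
x_adv = iterative_attack(x, toy_loss_grad)
max_delta = float(np.max(np.abs(x_adv - x)))
```

Note that with α = 3/255 and N = 30, the cumulative step budget (90/255) exceeds ϵ, so the per-iteration projection is what enforces the 16/255 bound.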