Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Adversarial Robustness of Vision Transformers

Authors: Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

TMLR 2022 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "This work provides a comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations. Tested on various white-box and transfer attack settings, we find that ViTs possess better adversarial robustness when compared with MLP-Mixer and convolutional neural networks (CNNs)..." |
| Researcher Affiliation | Collaboration | Rulin Shao (Carnegie Mellon University); Zhouxing Shi (University of California, Los Angeles); Jinfeng Yi (JD AI Research); Pin-Yu Chen (IBM Research); Cho-Jui Hsieh (University of California, Los Angeles) |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm". |
| Open Source Code | Yes | "Codes available at https://github.com/RulinShao/on-the-adversarial-robustness-of-visual-transformer." |
| Open Datasets | Yes | "Clean Accuracy (CA) stands for the accuracy evaluated on the entire ImageNet-1k (Deng et al., 2009) test set... For this experiment we use CIFAR-10 (Krizhevsky et al., 2009)..." |
| Dataset Splits | Yes | "Clean Accuracy (CA) stands for the accuracy evaluated on the entire ImageNet-1k (Deng et al., 2009) test set, Robust Accuracy (RA) stands for the accuracy on the adversarial examples generated with 1,000 test samples. For this experiment we use CIFAR-10 (Krizhevsky et al., 2009)..." |
| Hardware Specification | No | The paper does not provide specific hardware details, such as exact GPU/CPU models or memory specifications, used for its experiments. |
| Software Dependencies | No | The paper mentions Foolbox (Rauber et al., 2020), PyTorch (Paszke et al., 2019), and the timm package, but does not specify exact version numbers for these dependencies, which reproducibility requires. |
| Experiment Setup | Yes | "We train the denoisers using the stability objective for 25 epochs with a noise level of σ = 0.25, a learning rate of 10^-5, and a batch size of 64. ... We use a batch size of 128, an initial learning rate of 0.1, an SGD optimizer with momentum 0.9, and the learning rate decays after 15 epochs and 18 epochs respectively with a rate of 0.1. While we use a weight decay of 5 × 10^-4 for CNNs... we still use 2 × 10^-4 for ViT... Each model is trained using only 20 epochs to reduce the cost." |
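The learning-rate schedule quoted above (initial rate 0.1, decayed by a factor of 0.1 after epochs 15 and 18, over 20 epochs) can be expressed as a small step-decay function. This is a minimal sketch, not code from the paper's repository; it assumes "decays after 15 epochs" means the lower rate applies from epoch 15 onward (0-indexed), which the quoted text leaves ambiguous.

```python
def learning_rate(epoch, base_lr=0.1, milestones=(15, 18), gamma=0.1):
    """Step-decay schedule: multiply the base rate by `gamma`
    once for each milestone the current epoch has reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Schedule over the paper's 20-epoch training run:
# epochs 0-14 run at 0.1, epochs 15-17 at 0.01, epochs 18-19 at 0.001.
schedule = [learning_rate(e) for e in range(20)]
```

In a PyTorch training loop this corresponds to `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[15, 18]` and `gamma=0.1` wrapped around the SGD optimizer (momentum 0.9, weight decay 5 × 10^-4 for CNNs or 2 × 10^-4 for ViT, per the quoted setup).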