Composite Adversarial Attacks

Authors: Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental result shows CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack), and achieves the new state of the art on l∞, l2 and unrestricted adversarial attacks.
Researcher Affiliation | Collaboration | Xiaofeng Mao¹, Yuefeng Chen¹, Shuhui Wang²*, Hang Su³, Yuan He¹, Hui Xue¹ (¹Alibaba Group; ²Inst. of Comput. Tech., CAS; ³Tsinghua University)
Pseudocode | Yes | Algorithm 1: Attack policy search using NSGA-II. (A simplified search sketch appears after the table.)
Open Source Code | No | The paper mentions borrowing code from "open source attack toolbox, such as Foolbox (Rauber, Brendel, and Bethge 2017) and Advertorch (Ding, Wang, and Jin 2019)" but does not state that the authors provide their own source code for CAA.
Open Datasets | Yes | We run l∞ and l2 attack experiments on the CIFAR-10 and ImageNet (Deng et al. 2009) datasets. We perform unrestricted attack on the Bird&Bicycle (Brown et al. 2018) dataset.
Dataset Splits | Yes | For CIFAR-10, we search for the best policies on a small subset of 4,000 examples randomly chosen from the train set. All 10,000 examples in the test set are used for evaluating the searched policy. For ImageNet, as the whole validation set is large, we randomly select 1,000 images for policy search and 1,000 images for evaluation from the training and testing databases, respectively. (A split sketch appears after the table.)
Hardware Specification | No | The paper mentions "3 GPU/d" (GPU-days) in Table 5 when comparing search methods, indicating GPU usage, but it does not specify the GPU model, CPU, or any other hardware used for the experiments.
Software Dependencies | No | The paper mentions using the Foolbox (Rauber, Brendel, and Bethge 2017) and Advertorch (Ding, Wang, and Jin 2019) toolboxes, but does not give version numbers for these or any other software dependencies needed for reproduction.
Experiment Setup | Yes | For each attack operation, there are two hyper-parameters, i.e., magnitude ϵ and iteration steps t. ... To limit the search scope of the two hyper-parameters, two intervals are given: ϵ ∈ [0, ϵmax] and t ∈ [0, tmax], where ϵmax and tmax are the max magnitude and iteration of each attack, predefined by users. ... We discretize the range of magnitudes ϵ and steps t into 8 values (uniform spacing) so that we can simplify the composite adversarial attack search as a discrete optimization problem. (A grid-construction sketch appears after the table.)
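
The quoted setup discretizes both hyper-parameters into 8 uniformly spaced values, so each slot of an attack policy becomes a triple of discrete indices. Below is a minimal sketch of that grid, assuming illustrative values for ϵmax and tmax (the paper does not fix them here; they are user-defined budgets).

```python
import numpy as np

# Discretized search space as described in the experiment setup: magnitude
# eps and iteration steps t each take 8 uniformly spaced values in
# [0, eps_max] and [0, t_max]. EPS_MAX and T_MAX below are assumptions
# chosen for illustration, not the paper's settings.
EPS_MAX = 8 / 255   # assumed l_inf budget
T_MAX = 50          # assumed max iteration count

eps_grid = np.linspace(0.0, EPS_MAX, 8)            # 8 candidate magnitudes
step_grid = np.linspace(0, T_MAX, 8).astype(int)   # 8 candidate step counts

# A length-N policy is then a sequence of discrete choices
# (attack_id, eps_index, step_index), which turns composite-attack search
# into a discrete optimization problem.
example_policy = [(0, 3, 5), (2, 7, 2)]  # hypothetical two-attack policy
for attack_id, ei, ti in example_policy:
    print(f"attack {attack_id}: eps={eps_grid[ei]:.4f}, steps={step_grid[ti]}")
```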
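The paper's Algorithm 1 searches over such policies with NSGA-II, a multi-objective genetic algorithm. The following self-contained sketch illustrates the idea only: it uses a synthetic two-objective fitness (a stand-in for robust accuracy and attack time), simple mutation, and rank-only Pareto selection without crowding distance. All sizes, operators, and objectives are assumptions for illustration, not the authors' implementation.

```python
import random

N_ATTACKS, N_EPS, N_STEPS = 6, 8, 8   # discrete choices per policy slot
POLICY_LEN, POP, GENS = 3, 20, 10

def random_policy():
    return [(random.randrange(N_ATTACKS),
             random.randrange(N_EPS),
             random.randrange(N_STEPS)) for _ in range(POLICY_LEN)]

def evaluate(policy):
    """Stand-in for the real objectives: (robust accuracy of the defense
    under this policy, total attack time), both minimized. Synthetic here
    purely so the sketch runs end to end."""
    acc = sum((a + e + t) % 7 for a, e, t in policy) / (7.0 * len(policy))
    time_cost = sum(t for _, _, t in policy) / float(N_STEPS * len(policy))
    return acc, time_cost

def dominates(f, g):
    """Pareto dominance: f is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(f, g)) and any(x < y for x, y in zip(f, g))

def mutate(policy):
    child = list(policy)
    i = random.randrange(len(child))
    child[i] = (random.randrange(N_ATTACKS),
                random.randrange(N_EPS),
                random.randrange(N_STEPS))
    return child

population = [random_policy() for _ in range(POP)]
for _ in range(GENS):
    offspring = [mutate(random.choice(population)) for _ in range(POP)]
    combined = population + offspring
    fits = [evaluate(p) for p in combined]
    # Environmental selection: peel off successive Pareto fronts until
    # the next generation is full (NSGA-II additionally breaks ties with
    # crowding distance, omitted here for brevity).
    survivors, remaining = [], list(range(len(combined)))
    while len(survivors) < POP and remaining:
        front = [i for i in remaining
                 if not any(dominates(fits[j], fits[i]) for j in remaining if j != i)]
        survivors.extend(front)
        remaining = [i for i in remaining if i not in front]
    population = [combined[i] for i in survivors[:POP]]

best = min(population, key=lambda p: evaluate(p)[0])
print("best policy found:", best, "objectives:", evaluate(best))
```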
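The CIFAR-10 split described in the Dataset Splits row is simple to reproduce. Here is a minimal torchvision sketch, assuming a local ./data directory and an arbitrary seed; neither is specified by the paper.

```python
import random
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Split described in the paper: 4,000 randomly chosen training examples for
# policy search, and the full 10,000-example test set for evaluating the
# searched policy. Seed and data path are illustrative assumptions.
random.seed(0)
to_tensor = transforms.ToTensor()

train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
test = datasets.CIFAR10(root="./data", train=False, download=True, transform=to_tensor)

search_indices = random.sample(range(len(train)), 4000)
search_set = Subset(train, search_indices)   # used to search attack policies
eval_set = test                              # all 10,000 test examples

print(len(search_set), len(eval_set))        # 4000 10000
```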