Distributionally Adversarial Attack

Authors: Tianhang Zheng, Changyou Chen, Kui Ren

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | DAA is evaluated by attacking state-of-the-art defense models, including the adversarially-trained models provided by MIT Madry Lab. Notably, DAA ranks first on Madry Lab's white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0).
Researcher Affiliation | Academia | 1: State University of New York at Buffalo; 2: Zhejiang University. {tzheng4, changyou, kuiren}@buffalo.edu
Pseudocode | Yes | Algorithm 1: DAA algorithm (untargeted attack)
Open Source Code | Yes | Code for the experiments is released on https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
Open Datasets | Yes | The proposed DAA, together with the state-of-the-art methods PGD and Momentum PGD, is evaluated and compared on four standard datasets: MNIST, Fashion-MNIST (FMNIST), CIFAR10, and ImageNet. For MNIST, the attack target is the state-of-the-art PGD-adversarially-trained MNIST model provided by MIT Madry Lab (Madry et al. 2017). ... For ImageNet, we adopt the target model in (Kurakin, Goodfellow, and Bengio 2016c).
Dataset Splits | No | The paper uses standard datasets (MNIST, FMNIST, CIFAR10, ImageNet) but does not explicitly provide the train/validation/test split percentages or sample counts for its own experiments, nor does it cite a specific predefined split used for all datasets.
Hardware Specification | Yes | All experiments are conducted on a single Titan V GPU under a white-box setting.
Software Dependencies | No | The paper mentions using an RBF kernel and refers to hyperparameters, but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For all the methods related to kernel functions, an RBF kernel K(x, x′) = exp(−‖x − x′‖²₂ / h) is adopted. The bandwidth is set as h = med² / log M... The minibatch size (number of particles) is set to 100–200 for computational feasibility. Our specific settings on the hyperparameters c and 2γc/(1+c) can be found in our GitHub link... Algorithm 1: DAA algorithm (untargeted attack). Require: ... minibatch size M; step size ϵ; predefined final perturbation size α; total iterations L; rounds R; hyperparameter c or 2γc/(1+c).
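The kernel setup quoted above can be sketched in a few lines of NumPy: an RBF kernel K(x, x′) = exp(−‖x − x′‖² / h) over a minibatch of M particles, with the median-heuristic bandwidth h = med² / log M, where med is the median pairwise distance. This is an illustrative sketch of the quoted formulas, not the authors' released implementation; the function name and flattening of inputs are assumptions.

```python
import numpy as np

def rbf_kernel_median(X):
    """Sketch of the paper's RBF kernel over a minibatch X of shape [M, d].

    K(x, x') = exp(-||x - x'||^2 / h), with the median-heuristic
    bandwidth h = med^2 / log M (med = median pairwise distance).
    """
    M = X.shape[0]
    # Pairwise squared Euclidean distances via the expansion
    # ||x - x'||^2 = ||x||^2 + ||x'||^2 - 2 x.x'
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative values
    # Median of the off-diagonal pairwise distances
    med = np.median(np.sqrt(d2[np.triu_indices(M, k=1)]))
    h = med ** 2 / np.log(M)
    return np.exp(-d2 / h)
```

With minibatch sizes of 100–200 as in the quoted setup, the M×M kernel matrix stays small enough to form densely on a single GPU or CPU.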