Stronger and Faster Wasserstein Adversarial Attacks

Authors: Kaiwen Wu, Allen Wang, Yaoliang Yu

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on MNIST (LeCun, 1998), CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009). On MNIST and CIFAR-10, we attack two deep networks used by Wong et al. (2019). On ImageNet, we attack a 50-layer residual network (He et al., 2016). In Table 3, we compare (a) the strength of different attacks by adversarial accuracy, i.e. model accuracy under attack, and (b) running speed by the average number of dual iterations.
Researcher Affiliation | Collaboration | David R. Cheriton School of Computer Science, University of Waterloo; Vector Institute. Correspondence to: Kaiwen Wu <kaiwen.wu@uwaterloo.ca>.
Pseudocode | Yes |
Algorithm 1: Dual Projection
Input: G, C ∈ ℝ^{n×n}, x ∈ ℝ^n, δ > 0, l = 0, u > 0
Output: Π ∈ ℝ^{n×n}
1: while not converged do
2:     λ = (l + u)/2
3:     Π = argmin_{Π1 = x, Π ≥ 0} ‖Π − G + λC‖²_F
4:     if ⟨Π, C⟩ > δ then l = λ
5:     else u = λ
Open Source Code | Yes | Our implementation is available at https://github.com/watml/fast-wasserstein-adversarial.
Open Datasets | Yes | We conduct experiments on MNIST (LeCun, 1998), CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009).
Dataset Splits | Yes | On ImageNet, we attack a 50-layer residual network (He et al., 2016). ImageNet experiments are run on the first 100 samples of the validation set.
Hardware Specification | Yes | We thank NVIDIA Corporation (the data science grant) for donating two Titan V GPUs that enabled in part the computation in this work.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | FW uses the fixed decay schedule η_t = 2/(t+1). Step sizes of PGD are tuned over {1, 10⁻¹, 10⁻², 10⁻³}. Some experiments for different step sizes are presented in Section 6.1.
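The bisection in Algorithm 1 above can be sketched in NumPy. This is a hedged reconstruction, not the paper's implementation: it assumes the inner argmin decomposes row-wise into projections onto scaled simplices (since ‖Π − G + λC‖²_F separates over rows of Π under the constraints Π1 = x, Π ≥ 0), and that line 2 computes the bisection midpoint. The names `dual_projection`, `project_row_simplex`, and the parameter `u_init` are illustrative, not from the paper.

```python
import numpy as np

def project_row_simplex(v, s):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = s}
    (sort-based method, O(n log n))."""
    if s <= 0:
        return np.zeros_like(v)
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - s
    ind = np.arange(1, len(v) + 1)
    cond = u - css / ind > 0          # candidate support sizes
    rho = ind[cond][-1]               # largest valid support size
    theta = css[cond][-1] / rho       # shift that enforces sum(w) = s
    return np.maximum(v - theta, 0.0)

def dual_projection(G, C, x, delta, u_init=10.0, tol=1e-6, max_iter=50):
    """Bisection on the dual variable lambda (sketch of Algorithm 1):
    project G onto {Pi : Pi @ 1 = x, Pi >= 0, <Pi, C> <= delta}.
    Assumes u_init is large enough that the cost constraint holds
    at lambda = u_init."""
    l, u = 0.0, u_init
    while u - l > tol and max_iter > 0:
        max_iter -= 1
        lam = 0.5 * (l + u)
        M = G - lam * C
        # inner argmin: project each row of M onto the scaled simplex
        # {w >= 0, sum(w) = x[i]}
        Pi = np.vstack([project_row_simplex(M[i], x[i])
                        for i in range(len(x))])
        if np.sum(Pi * C) > delta:
            l = lam   # transport cost too high: increase the penalty
        else:
            u = lam   # feasible: shrink the interval from above
    # return the iterate at the feasible endpoint u
    M = G - u * C
    return np.vstack([project_row_simplex(M[i], x[i])
                      for i in range(len(x))])
```

By construction the returned Π satisfies the row-sum and nonnegativity constraints exactly, while the cost constraint ⟨Π, C⟩ ≤ δ is met up to the bisection tolerance.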