Wasserstein Adversarial Examples via Projected Sinkhorn Iterations

Authors: Eric Wong, Frank Schmidt, Zico Kolter

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The resulting algorithm can successfully attack image classification models, bringing traditional CIFAR10 models down to 3% accuracy within a Wasserstein ball with radius 0.1 (i.e., moving 10% of the image mass 1 pixel), and we demonstrate that PGD-based adversarial training can improve this adversarial accuracy to 76%. |
| Researcher Affiliation | Collaboration | ¹Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA; ²Bosch Center for Artificial Intelligence, Renningen, Germany; ³Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA; ⁴Bosch Center for Artificial Intelligence, Pittsburgh, Pennsylvania, USA |
| Pseudocode | Yes | Algorithm 1: An epoch of adversarial training for a loss function ℓ, classifier fθ with parameters θ, and step size parameter α for some ball B (see the first sketch below the table). |
| Open Source Code | Yes | ...and code for all experiments in the paper is available at https://github.com/locuslab/projected_sinkhorn |
| Open Datasets | Yes | For MNIST we used the convolutional ReLU architecture used in Wong & Kolter (2018)... |
| Dataset Splits | No | The paper uses standard datasets (MNIST, CIFAR10) which typically have predefined splits, but it does not explicitly state the training, validation, and test splits (e.g., percentages or number of samples) within the paper itself. |
| Hardware Specification | Yes | ...taking about 0.02 seconds per iteration on a Titan X for minibatches of size 100. |
| Software Dependencies | No | The paper mentions using a 'convolutional ReLU architecture' and a 'ResNet18 architecture', but does not specify software dependencies with version numbers (e.g., TensorFlow X.Y, PyTorch A.B). |
| Experiment Setup | Yes | For all experiments in this section, we focused on using 5×5 local transport plans for the Wasserstein ball, and used an entropy regularization constant of 1000 for MNIST and 3000 for CIFAR10. (See the second sketch below the table.) |
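
To make the Pseudocode row concrete, here is a minimal sketch of one epoch of PGD-based adversarial training in PyTorch, the framework used by the linked repository. The names `pgd_attack` and `adversarial_training_epoch` are our own, the `eps` and `alpha` values are illustrative, and the projection step substitutes a simple L∞ clamp so the sketch is self-contained; the paper's actual projection onto the Wasserstein ball is the projected Sinkhorn iteration, which is not reimplemented here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(f, x, y, eps=0.1, alpha=0.01, steps=50):
    """Projected gradient descent within an eps-ball around x.

    Illustrative placeholder: the paper projects onto a Wasserstein
    ball via projected Sinkhorn iterations; this sketch uses an
    L-infinity projection instead so it runs as-is.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(f(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back onto the ball B
            x_adv = x_adv.clamp(0, 1)                         # keep a valid image
    return x_adv.detach()

def adversarial_training_epoch(f, opt, loader, eps=0.1, alpha=0.01):
    """One epoch in the spirit of Algorithm 1: train on adversarial examples."""
    for x, y in loader:
        x_adv = pgd_attack(f, x, y, eps=eps, alpha=alpha)
        opt.zero_grad()
        loss = F.cross_entropy(f(x_adv), y)
        loss.backward()
        opt.step()
```

Structurally, swapping the L∞ projection for the paper's projected Sinkhorn step is the only change needed to turn this generic PGD loop into the Wasserstein attack and training procedure the table describes.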
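The Experiment Setup row's "5×5 local transport plans" means each pixel's mass may only move within a 5×5 window around it, so the transport cost only needs to be defined over such a window. The sketch below builds one plausible local cost matrix, assuming (as the radius-0.1 reading in the Research Type row suggests) that cost is the Euclidean distance between pixel coordinates; the exact cost used in the released code may differ, and `local_cost_matrix` is a name of our own.

```python
import torch

def local_cost_matrix(k=5):
    """Cost of moving mass between positions in a k x k window.

    Assumption: cost = Euclidean distance between pixel coordinates,
    so a Wasserstein radius of 0.1 corresponds to moving 10% of the
    image mass by one pixel, matching the quote in the table above.
    """
    coords = torch.stack(torch.meshgrid(
        torch.arange(k, dtype=torch.float32),
        torch.arange(k, dtype=torch.float32),
        indexing="ij"), dim=-1).reshape(-1, 2)    # (k*k, 2) pixel coordinates
    return torch.cdist(coords, coords)            # (k*k, k*k) pairwise distances
```

This cost matrix is what the entropy-regularized Sinkhorn objective would be built on, with the quoted regularization constants (1000 for MNIST, 3000 for CIFAR10) controlling the strength of the entropy term.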