Lower Bounds on Adversarial Robustness from Optimal Transport

Authors: Arjun Nitin Bhagoji, Daniel Cullina, Prateek Mittal

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we use our framework to study the gap between the optimal classification performance possible and that currently achieved by state-of-the-art robustly trained neural networks for datasets of interest, namely, MNIST, Fashion MNIST and CIFAR-10. In this section, we use Theorem 1 to find lower bounds on adversarial robustness for empirical datasets of interest. We also compare these bounds to the performance of robustly trained classifiers."
Researcher Affiliation | Academia | Arjun Nitin Bhagoji, Department of Electrical Engineering, Princeton University (abhagoji@princeton.edu); Daniel Cullina, Department of Electrical Engineering, Pennsylvania State University (cullina@psu.edu); Prateek Mittal, Department of Electrical Engineering, Princeton University (pmittal@princeton.edu)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | "For reproducibility purposes, our code is available at https://github.com/inspire-group/robustness-via-transport."
Open Datasets | Yes | "We consider the adversarial classification problem on three widely used image datasets, namely MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47]"
Dataset Splits | No | The paper mentions using "2000 images from the training set" but does not specify a training/validation/test split, provide absolute sample counts per split, or reference predefined validation splits.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper mentions the "Adam optimizer [43]" and the "Linear Sum Assignment module from Scipy [41]" but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "For the MNIST and Fashion MNIST dataset, we compare the lower bound with the performance of a 3-layer Convolutional Neural Network (CNN) that is robustly trained using iterative adversarial training [54] with the Adam optimizer [43] for 12 epochs. For the CIFAR-10 dataset, we use a ResNet-18 [36] trained for 200 epochs... To generate adversarial examples both during the training process and to test robustness, we use Projected Gradient Descent (PGD) with an ℓ2 constraint, random initialization and a minimum of 10 iterations."
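The ℓ2-constrained PGD attack quoted in the Experiment Setup row (random initialization, at least 10 iterations, ℓ2 projection) can be sketched in a few lines of NumPy. This is a generic illustration of the attack, not the paper's implementation: the function names, step-size heuristic, and the toy logistic-loss target are our assumptions.

```python
import numpy as np

def pgd_l2(x, loss_grad, eps, steps=10, step_size=None, seed=0):
    """l2-constrained PGD sketch: random start, normalized gradient
    ascent on the loss, projection back onto the eps-ball around x.
    `loss_grad` returns the gradient of the loss w.r.t. the input."""
    rng = np.random.default_rng(seed)
    if step_size is None:
        step_size = 2.5 * eps / steps  # common heuristic, not from the paper
    # Random initialization inside the eps-ball around x.
    delta = rng.normal(size=x.shape)
    delta *= eps * rng.uniform() / (np.linalg.norm(delta) + 1e-12)
    for _ in range(steps):
        g = loss_grad(x + delta)
        delta += step_size * g / (np.linalg.norm(g) + 1e-12)  # ascend the loss
        norm = np.linalg.norm(delta)
        if norm > eps:                                        # project onto the ball
            delta *= eps / norm
    return x + delta

# Toy target: cross-entropy loss of a fixed linear (logistic) classifier.
w, b = np.array([1.0, -2.0]), 0.5
def grad_xent(x, y=1.0):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w  # d(loss)/dx for cross-entropy with label y

x_clean = np.array([2.0, 0.0])
x_adv = pgd_l2(x_clean, grad_xent, eps=0.5)
print(np.linalg.norm(x_adv - x_clean))  # perturbation stays within the l2 budget
```

Because projection is applied after every step, the returned point never leaves the ℓ2 ball of radius `eps`, matching the constraint described in the setup.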
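The Software Dependencies row notes that the paper uses the Linear Sum Assignment module from SciPy to compute its empirical lower bounds. A minimal sketch of that call is below: a minimum-cost perfect matching between two equal-size empirical samples under a pairwise ℓ2 cost, which is an optimal-transport plan between uniform empirical measures. The toy data and the mean matched cost as a summary statistic are our illustrative assumptions, not the paper's exact computation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two toy equal-size samples standing in for the two classes.
rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, size=(4, 2))
Y = rng.normal(loc=3.0, size=(4, 2))

# Pairwise l2 costs; linear_sum_assignment returns row/column indices of
# the minimum-cost perfect matching over this cost matrix.
cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
matched_cost = cost[rows, cols].mean()
print(matched_cost)
```

For n-point samples this solves the assignment problem exactly, which is why a matching routine suffices in place of a general optimal-transport solver when both empirical measures are uniform and equal-sized.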