Exploring the Landscape of Spatial Robustness

Authors: Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry

ICML 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We perform extensive experiments that provide a fine-grained understanding of rotation/translation robustness on a wide spectrum of datasets and training regimes. |
| Researcher Affiliation | Academia | EECS, MIT, Massachusetts, USA. Correspondence to: Logan Engstrom <engstrom@mit.edu>, Brandon Tran <btran115@mit.edu>, Dimitris Tsipras <tsipras@mit.edu>, Ludwig Schmidt <ludwigs@mit.edu>, Aleksander Madry <madry@mit.edu>. |
| Pseudocode | No | No pseudocode or algorithm blocks found. |
| Open Source Code | No | No explicit statement or link providing access to the authors' own open-source code for the methodology described in the paper. |
| Open Datasets | Yes | We evaluate standard image classifiers for the MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015) datasets. |
| Dataset Splits | No | The paper trains on MNIST, CIFAR10, and ImageNet and evaluates on their test sets, but does not report explicit training/validation/test splits (e.g., percentages or sample counts) for its experiments. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions using TensorFlow and Tensorpack, but does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For grid search attacks, we consider 5 values per translation direction and 31 values for rotations, equally spaced. For first-order attacks, we use 200 steps of projected gradient descent of step size 0.01 times the parameter range. |
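The quoted grid attack enumerates 5 × 5 × 31 = 775 rotation/translation combinations per image and counts the image as successfully attacked if any combination changes the model's prediction. The sketch below illustrates that exhaustive procedure under stated assumptions: it is written in PyTorch rather than the paper's TensorFlow/Tensorpack stack, the attack budgets (±3 px translation, ±30° rotation) and the helper names `rotate_translate` / `grid_attack_fooled` are illustrative choices, and it is not the authors' released code.

```python
import itertools
import math

import numpy as np
import torch
import torch.nn.functional as F


def rotate_translate(images, angle_deg, tx, ty):
    """Rotate an NCHW float tensor by angle_deg degrees and shift it by (tx, ty) pixels."""
    n, _, h, w = images.shape
    rad = math.radians(float(angle_deg))
    cos, sin = math.cos(rad), math.sin(rad)
    # Affine matrix in the normalized coordinates expected by affine_grid;
    # bilinear resampling here is an assumption, the paper's interpolation may differ.
    mat = torch.tensor([[cos, -sin, 2.0 * tx / w],
                        [sin,  cos, 2.0 * ty / h]],
                       dtype=images.dtype, device=images.device)
    grid = F.affine_grid(mat.repeat(n, 1, 1), list(images.shape), align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)


def grid_attack_fooled(model, images, labels, max_trans=3.0, max_rot=30.0,
                       n_trans=5, n_rot=31):
    """Return a boolean mask that is True where some grid transformation flips the prediction.

    The grid sizes (5 values per translation direction, 31 rotations) follow the
    quoted setup; the budgets max_trans / max_rot are assumed for illustration.
    """
    txs = np.linspace(-max_trans, max_trans, n_trans)
    tys = np.linspace(-max_trans, max_trans, n_trans)
    angles = np.linspace(-max_rot, max_rot, n_rot)

    fooled = torch.zeros(images.shape[0], dtype=torch.bool, device=images.device)
    with torch.no_grad():
        for tx, ty, angle in itertools.product(txs, tys, angles):  # 5 * 5 * 31 = 775 trials
            preds = model(rotate_translate(images, angle, tx, ty)).argmax(dim=1)
            fooled |= preds != labels
    return fooled
```

The first-order attack quoted in the same row would instead treat the three spatial parameters (tx, ty, angle) as optimization variables and take 200 projected gradient steps with a step size of 0.01 times each parameter's range, rather than enumerating the grid.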