A Spectral View of Adversarially Robust Features

Authors: Shivam Garg, Vatsal Sharan, Brian Hu Zhang, Gregory Valiant

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment (variable, result, and supporting evidence from the paper):
Research Type: Experimental
Evidence: "In Section 5, we also test our adversarial features on the downstream task of classification on adversarial images, and obtain positive results." Section 5 ("Experiments", 5.1 "Image Classification: The MNIST Dataset") opens with: "While the main focus of our work is to improve the conceptual understanding of adversarial robustness, we also perform experiments on the MNIST dataset."
Researcher Affiliation: Academia
Evidence: "Shivam Garg, Vatsal Sharan, Brian Hu Zhang, Gregory Valiant. Stanford University, Stanford, CA 94305. {shivamgarg, vsharan, bhz, gvaliant}@stanford.edu"
Pseudocode: No
Evidence: The paper describes its methods in prose and mathematical notation but contains no structured pseudocode or algorithm block.
Open Source Code: No
Evidence: The paper makes no explicit statement about releasing source code and provides no link to a code repository for the described methodology.
Open Datasets: Yes
Evidence: "We used a subset of MNIST dataset, which is commonly used in discussions of adversarial examples [Goodfellow et al., 2014, Szegedy et al., 2013, Madry et al., 2017]."
Dataset Splits: No
Evidence: "Our dataset has 11,000 images of handwritten digits from zero to nine, of which 10,000 images are used for training, and rest for test." This specifies training and test sets but no validation split.
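For concreteness, here is a minimal PyTorch sketch of the 10,000/1,000 split the quote describes. The paper does not say how its 11,000-image subset was drawn, so the use of torchvision's MNIST loader, the random selection, and the seed are all assumptions made for illustration.

```python
# Minimal sketch of the quoted split, assuming the 11,000-image subset is
# drawn at random from torchvision's MNIST training set (an assumption; the
# paper does not describe the selection procedure).
import torch
from torchvision import datasets, transforms

full = datasets.MNIST(root="data", train=True, download=True,
                      transform=transforms.ToTensor())
gen = torch.Generator().manual_seed(0)             # assumed seed, for reproducibility
idx = torch.randperm(len(full), generator=gen)[:11_000]
train_set = torch.utils.data.Subset(full, idx[:10_000].tolist())
test_set = torch.utils.data.Subset(full, idx[10_000:].tolist())  # no validation split
```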
Hardware Specification: No
Evidence: The paper does not specify hardware details such as GPU models, CPU types, or memory used to run the experiments.
Software Dependencies: No
Evidence: "We use PyTorch implementation of Adam [Kingma and Ba, 2014] for optimization with a step size of 0.001." This names PyTorch and Adam but gives a version number for neither.
Experiment Setup: Yes
Evidence: "We consider a fully connected neural network with one hidden layer having 200 units, with ReLU non-linearity, and cross-entropy loss. We use PyTorch implementation of Adam [Kingma and Ba, 2014] for optimization with a step size of 0.001. To obtain a robust neural network, we generate adversarial examples using projected gradient descent for each mini-batch, and train our model on these examples. For projected gradient descent, we use a step size of 0.1 for 40 iterations."
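To make the quoted setup concrete, the sketch below implements it in PyTorch: a one-hidden-layer, 200-unit ReLU network trained with Adam (step size 0.001) on PGD adversarial examples generated per mini-batch (step size 0.1, 40 iterations). The L-infinity perturbation budget EPS is not stated in the quoted passage, so its value here is an assumption, as are the [0, 1] pixel-range clamp and the plain (non-random) PGD initialization.

```python
# Sketch of the quoted adversarial training loop. Architecture, optimizer,
# and PGD step size / iteration count follow the paper's text; EPS, the
# [0, 1] pixel clamp, and starting PGD at the clean image are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 0.3        # assumed L-infinity budget; not given in the quoted passage
PGD_STEP = 0.1   # from the paper
PGD_ITERS = 40   # from the paper

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10))
opt = torch.optim.Adam(model.parameters(), lr=0.001)

def pgd_attack(model, x, y):
    """Projected gradient ascent on the loss within the L-infinity ball around x."""
    x_adv = x.clone().detach()
    for _ in range(PGD_ITERS):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + PGD_STEP * grad.sign()    # ascend the loss
            x_adv = x + (x_adv - x).clamp(-EPS, EPS)  # project onto the ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel values
    return x_adv.detach()

def train_step(x, y):
    """One mini-batch: attack the current model, then train on the adversarial batch."""
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()
```

Each call to train_step mirrors the quoted procedure: adversarial examples are first generated for the mini-batch with projected gradient descent, and the model parameters are then updated on those examples only.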