GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification

Authors: Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide comprehensive evaluation of the above adversarial example detection/classification methods, and demonstrate their competitive performances and compelling properties. Code is available at https://github.com/xuwangyin/GAT-Generative-Adversarial-Training" (see also Section 4, Evaluation Methodology, and Section 5, Experiments).
Researcher Affiliation | Collaboration | Xuwang Yin, Department of Electrical and Computer Engineering, University of Virginia (xy4cm@virginia.edu); Soheil Kolouri, Information and Systems Sciences Laboratory, HRL Laboratories, LLC (skolouri@hrl.com); Gustavo K. Rohde, Department of Electrical and Computer Engineering, University of Virginia (gustavo@virginia.edu)
Pseudocode | No | The paper describes the method using mathematical equations and textual descriptions but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/xuwangyin/GAT-Generative-Adversarial-Training
Open Datasets | Yes | "We use 50K samples from the original training set for training and the remaining 10K samples for validation" (referring to the MNIST dataset). "On CIFAR10 we train a single detection model..." MNIST and CIFAR10 are well-known public datasets.
Dataset Splits | Yes | "We use 50K samples from the original training set for training and the remaining 10K samples for validation, and report the test performance based on the checkpoint which has the best validation performance."
Hardware Specification | Yes | "On our Quadro M6000 24GB GPU (TensorFlow 1.13.1), the inference speed of the generative classifier is roughly ten times slower than the softmax classifier."
Software Dependencies | Yes | "On our Quadro M6000 24GB GPU (TensorFlow 1.13.1), the inference speed of the generative classifier is roughly ten times slower than the softmax classifier."
Experiment Setup | Yes | "All binary classifiers are trained for 100 epochs, where in each iteration we sample 32 in-class samples as the positive samples, and 32 out-class samples to create adversarial examples which will be used as negative samples." Table 5 of the paper lists the training setups for the MNIST detection models, including the PGD attack steps and step size used during training.
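The Experiment Setup row describes one GAT training iteration: a batch of 32 in-class positives plus 32 out-class samples that are perturbed with PGD to serve as adversarial negatives. The sketch below illustrates that batch construction with a toy logistic detector in NumPy; the model, the PGD hyperparameters (eps, step size, number of steps are common MNIST defaults, not necessarily the paper's Table 5 values), and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, w, b, eps=0.3, step=0.01, steps=40):
    """L-inf PGD on a toy logistic detector f(x) = sigmoid(w.x + b):
    ascend the detector score so out-class samples mimic in-class ones."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)                  # detector score per sample
        grad = (p * (1 - p))[:, None] * w[None, :]  # d(score)/d(x) for sigmoid(w.x + b)
        x_adv = x_adv + step * np.sign(grad)        # signed-gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep a valid pixel range
    return x_adv

rng = np.random.default_rng(0)
d = 784                                      # MNIST dimensionality
w, b = rng.normal(size=d) * 0.01, 0.0        # toy detector parameters
pos = rng.random((32, d))                    # 32 in-class samples (label 1)
neg = pgd_attack(rng.random((32, d)), w, b)  # 32 adversarial negatives (label 0)
batch_x = np.concatenate([pos, neg])
batch_y = np.concatenate([np.ones(32), np.zeros(32)])
assert batch_x.shape == (64, d)              # one 32+32 training batch
```

In the paper's setup this batch would then be used for one gradient update of the binary detector, repeated for 100 epochs per class-conditional detector.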