Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
Authors: Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide comprehensive evaluation of the above adversarial example detection/classification methods, and demonstrate their competitive performances and compelling properties. Code is available at https://github.com/xuwangyin/GAT-Generative-Adversarial-Training. ... 4 EVALUATION METHODOLOGY ... 5 EXPERIMENTS |
| Researcher Affiliation | Collaboration | Xuwang Yin, Department of Electrical and Computer Engineering, University of Virginia; Soheil Kolouri, Information and Systems Sciences Laboratory, HRL Laboratories, LLC; Gustavo K. Rohde, Department of Electrical and Computer Engineering, University of Virginia |
| Pseudocode | No | The paper describes the method using mathematical equations and textual descriptions but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/xuwangyin/GAT-Generative-Adversarial-Training |
| Open Datasets | Yes | We use 50K samples from the original training set for training and the remaining 10K samples for validation... (referring to the MNIST dataset). ... On CIFAR10 we train a single detection model... MNIST and CIFAR10 are well-known public datasets. |
| Dataset Splits | Yes | We use 50K samples from the original training set for training and the remaining 10K samples for validation, and report the test performance based on the checkpoint which has the best validation performance. |
| Hardware Specification | Yes | On our Quadro M6000 24GB GPU (TensorFlow 1.13.1), the inference speed of the generative classifier is roughly ten times slower than the softmax classifier. |
| Software Dependencies | Yes | On our Quadro M6000 24GB GPU (TensorFlow 1.13.1), the inference speed of the generative classifier is roughly ten times slower than the softmax classifier. |
| Experiment Setup | Yes | All binary classifiers are trained for 100 epochs, where in each iteration we sample 32 in-class samples as the positive samples, and 32 out-class samples to create adversarial examples which will be used as negative samples. ... Table 5: Training setups for MNIST detection models (PGD attack steps, step size for training) |
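The dataset-split excerpt above (50K of MNIST's 60K training samples for training, the remaining 10K for validation, with the best-validation checkpoint reported) can be sketched as follows. This is a minimal illustration, not the paper's code; the shuffling, seed, and function name are assumptions.

```python
# Hedged sketch of the 50K/10K train/validation split described in the
# excerpt. Whether the paper shuffles before splitting is not stated;
# a seeded permutation is assumed here for reproducibility.
import numpy as np

def split_train_val(num_train_total=60_000, num_val=10_000, seed=0):
    """Return index arrays for a 50K-train / 10K-validation split."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_train_total)
    return indices[num_val:], indices[:num_val]

train_idx, val_idx = split_train_val()
print(len(train_idx), len(val_idx))  # 50000 10000
```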
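The experiment-setup excerpt describes each training iteration as pairing 32 in-class positives with 32 out-class samples that are perturbed by PGD into adversarial negatives. A minimal sketch of that batch construction, using a toy linear detector in place of the paper's network (the PGD budget, step size, and loss direction here are illustrative assumptions, not the values from Table 5):

```python
# Hedged sketch: build one training batch of 32 in-class positives and
# 32 PGD-perturbed out-class negatives, as described in the excerpt.
# The linear "detector" and all hyperparameters are stand-in assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM, BATCH = 784, 32
EPS, STEP, STEPS = 0.3, 0.1, 10  # illustrative L_inf PGD budget

w = rng.normal(scale=0.01, size=DIM)  # toy linear detector: logit(x) = w @ x

def pgd_attack(x, w, eps=EPS, step=STEP, steps=STEPS):
    """Perturb out-class inputs to maximize the detector logit (L_inf PGD)."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.broadcast_to(w, x_adv.shape)    # d(logit)/dx for a linear model
        x_adv = x_adv + step * np.sign(grad)      # ascent step on the logit
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

in_class = rng.uniform(size=(BATCH, DIM))    # stand-ins for real samples
out_class = rng.uniform(size=(BATCH, DIM))

positives = in_class                  # label 1: genuine in-class data
negatives = pgd_attack(out_class, w)  # label 0: adversarial out-class data
batch = np.concatenate([positives, negatives])
labels = np.concatenate([np.ones(BATCH), np.zeros(BATCH)])
print(batch.shape, labels.shape)  # (64, 784) (64,)
```

One such batch is drawn per iteration over the 100 training epochs; the key design point in the excerpt is that the negatives are adversarial by construction, so the detector is trained directly against the attack it must later reject.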