Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Authors: Pouya Samangouei, Maya Kabkab, Rama Chellappa

ICLR 2018

Reproducibility assessment: each entry below lists the variable, the result, and the supporting LLM response.
Research Type: Experimental. "We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies."
Researcher Affiliation: Academia. "Pouya Samangouei, Maya Kabkab, and Rama Chellappa; Department of Electrical and Computer Engineering, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742; {pouya, mayak, rama}@umiacs.umd.edu"
Pseudocode: No. The paper includes figures illustrating the algorithm flow (Figure 1, Figure 2) and mathematical descriptions of the gradient-descent steps (Appendix B), but no formally labeled 'Pseudocode' or 'Algorithm' block with structured steps.
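For orientation, the reconstruction procedure that those figures and Appendix B describe, minimizing ||G(z) - x||^2 over z with R random restarts of L gradient-descent steps and then classifying G(z*), can be sketched as below. This is a minimal illustrative sketch, not the authors' released code; the generator G, the latent dimension, and the learning rate are assumptions.

```python
import tensorflow as tf

def defense_gan_project(x, G, latent_dim=128, L=200, R=10, lr=0.05):
    """Return G(z*), where z* approximately minimizes ||G(z) - x||^2.

    Illustrative sketch only: G, latent_dim, and lr are placeholders,
    not values taken from the paper.
    """
    best_z, best_loss = None, tf.constant(float("inf"))
    for _ in range(R):                                  # R random restarts
        z = tf.Variable(tf.random.normal([1, latent_dim]))
        opt = tf.keras.optimizers.SGD(learning_rate=lr)
        for _ in range(L):                              # L gradient-descent steps on z
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean(tf.square(G(z) - x))
            opt.apply_gradients(zip(tape.gradient(loss, [z]), [z]))
        final_loss = tf.reduce_mean(tf.square(G(z) - x))
        if final_loss < best_loss:
            best_loss, best_z = final_loss, tf.identity(z)
    return G(best_z)                                    # reconstruction fed to the classifier
```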
Open Source Code: No. The paper states, "Our implementation is based on TensorFlow (Abadi et al., 2015) and builds on open-source software: CleverHans by Papernot et al. (2016a) and improved WGAN training by Gulrajani et al. (2017)." This indicates they used existing open-source software, but there is no explicit statement or link indicating that the code for their own implementation or methodology is released or publicly available.
Open Datasets: Yes. "In our experiments, we use two different image datasets: the MNIST handwritten digits dataset (LeCun et al., 1998) and the Fashion-MNIST (F-MNIST) clothing articles dataset (Xiao et al., 2017)."
Dataset Splits: Yes. "We split the training images into a training set of 50,000 images and hold-out a validation set containing 10,000 images."
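As an illustration, the stated 50,000/10,000 split of the MNIST training set could be reproduced as below; the use of tf.keras's bundled MNIST loader and the absence of shuffling are assumptions, since the paper specifies neither.

```python
import tensorflow as tf

# Load the 60,000 MNIST training images and split them 50,000 / 10,000
# (loader choice and ordering are assumptions, not taken from the paper).
(x_all, y_all), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, y_train = x_all[:50_000], y_all[:50_000]   # training set
x_val,   y_val   = x_all[50_000:], y_all[50_000:]   # held-out validation set
```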
Hardware Specification: Yes. "We use machines equipped with NVIDIA GeForce GTX TITAN X GPUs."
Software Dependencies: No. The paper mentions: "Our implementation is based on TensorFlow (Abadi et al., 2015) and builds on open-source software: CleverHans by Papernot et al. (2016a) and improved WGAN training by Gulrajani et al. (2017)." While it names the software components, it does not provide version numbers for TensorFlow, CleverHans, or the WGAN training code, which are required for reproducibility.
Experiment Setup: Yes. "Defense-GAN has L = 200 and R = 10 ... We perform the CW attack for 100 iterations of projected GD, with learning rate 10.0, and use c = 100 in equation (4)."
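For quick reference, the quoted hyperparameters can be collected into a plain configuration block; the dictionary names and keys below are illustrative, while the values are the ones reported above.

```python
# Hyperparameters quoted in the paper's experiment setup; names and keys are
# illustrative, the values are the reported ones.
DEFENSE_GAN = {
    "L": 200,    # gradient-descent iterations per random restart
    "R": 10,     # number of random restarts
}
CW_ATTACK = {
    "iterations": 100,        # projected gradient-descent iterations
    "learning_rate": 10.0,
    "c": 100,                 # constant c in equation (4) of the paper
}
```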