Jacobian Adversarially Regularized Networks for Robustness

Authors: Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training examples.
Researcher Affiliation | Academia | Nanyang Technological University; Mila, Polytechnique Montreal
Pseudocode | Yes | Algorithm 1 details the corresponding pseudocode.
Open Source Code | Yes | Source code available at https://github.com/alvinchangw/JARN
Open Datasets | Yes | We conduct experiments on three image datasets, MNIST, SVHN and CIFAR-10, to evaluate the adversarial robustness of models trained by JARN. MNIST consists of 60k training and 10k test binary-colored images. SVHN is a 10-class house number image classification dataset with 73257 training and 26032 test images, each of size 32×32×3. CIFAR-10 contains 32×32×3 colored images labeled as 10 classes, with 50k training and 10k test images.
Dataset Splits | No | The paper specifies training and test set sizes for MNIST, SVHN, and CIFAR-10, but does not explicitly state the use of a separate validation set or its split details.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processors, or memory used for running the experiments.
Software Dependencies | No | The paper does not specify particular software versions (e.g., Python, PyTorch, TensorFlow, or CUDA versions) required to replicate the experiment.
Experiment Setup | Yes | For MNIST, JARN uses λadv = 1, a discriminator network of 2 CNN layers (64-128 output channels), updated once every 10 fcls training iterations. For SVHN, it uses λadv = 5, a discriminator of 5 CNN layers (16-32-64-128-256 output channels), updated once every 20 fcls training iterations. For CIFAR-10, it uses λadv = 1, a discriminator of 5 CNN layers (32-64-128-256-512 output channels), updated once every 20 fcls training iterations.
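The per-dataset discriminator settings in the Experiment Setup row can be sketched as small convolutional networks. This is a hedged illustration, assuming PyTorch: the layer widths follow the row above, but the kernel sizes, strides, pooling head, and the helper name `make_discriminator` are our own assumptions, not the authors' implementation.

```python
# Sketch of the per-dataset JARN discriminators (layer widths from the paper;
# kernel size, stride, and the pooled linear head are illustrative guesses).
import torch
import torch.nn as nn

def make_discriminator(channels, in_ch):
    layers, prev = [], in_ch
    for ch in channels:
        layers += [nn.Conv2d(prev, ch, kernel_size=3, stride=2, padding=1),
                   nn.LeakyReLU(0.2)]
        prev = ch
    # Pool to 1x1 and emit a single real/fake logit.
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(prev, 1)]
    return nn.Sequential(*layers)

disc_mnist = make_discriminator([64, 128], in_ch=1)                 # λadv = 1, updated every 10 iters
disc_svhn  = make_discriminator([16, 32, 64, 128, 256], in_ch=3)    # λadv = 5, updated every 20 iters
disc_cifar = make_discriminator([32, 64, 128, 256, 512], in_ch=3)   # λadv = 1, updated every 20 iters

out = disc_cifar(torch.zeros(2, 3, 32, 32))  # one logit per image
```

The adaptive pooling head makes the same builder work for 28×28 MNIST inputs and 32×32 SVHN/CIFAR-10 inputs.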
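The training procedure summarized in the Pseudocode row (Algorithm 1) can be sketched roughly as follows: the classifier's input Jacobian is passed through an adapter and an adversarial loss pushes it to look like a real image to a discriminator, which is itself updated only every few iterations. This is a minimal toy sketch assuming PyTorch, with MLPs standing in for the paper's CNNs; all names (`f_cls`, `adapter`, `disc`, hyperparameter values) are illustrative, not the authors' code.

```python
# Toy sketch of JARN-style training: classification loss plus an adversarial
# regularizer on the input Jacobian (MLPs and random data stand in for real models).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_out))

d, n_cls, lam_adv, disc_every = 16, 10, 1.0, 10  # illustrative hyperparameters
f_cls = mlp(d, n_cls)    # classifier
adapter = mlp(d, d)      # maps Jacobians into the input domain
disc = mlp(d, 1)         # discriminator: real input vs adapted Jacobian

opt_cls = torch.optim.Adam(list(f_cls.parameters()) + list(adapter.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(20):
    x = torch.rand(8, d)                  # stand-in batch of "images"
    y = torch.randint(0, n_cls, (8,))
    x.requires_grad_(True)
    loss_cls = F.cross_entropy(f_cls(x), y)
    # Jacobian of the classification loss w.r.t. the input, kept in the graph
    # so the adversarial loss can backpropagate through it.
    jac, = torch.autograd.grad(loss_cls, x, create_graph=True)
    fake = adapter(jac)
    # Classifier + adapter try to make the Jacobian look "real" to disc.
    loss_adv = F.binary_cross_entropy_with_logits(disc(fake), torch.ones(8, 1))
    opt_cls.zero_grad()
    (loss_cls + lam_adv * loss_adv).backward()
    opt_cls.step()
    if step % disc_every == 0:            # discriminator updated only every k steps
        d_loss = (F.binary_cross_entropy_with_logits(disc(x.detach()), torch.ones(8, 1))
                  + F.binary_cross_entropy_with_logits(disc(fake.detach()), torch.zeros(8, 1)))
        opt_disc.zero_grad()
        d_loss.backward()
        opt_disc.step()
```

The infrequent discriminator update mirrors the "update it for every 10/20 fcls training iterations" settings in the Experiment Setup row; `create_graph=True` is what lets gradients of the adversarial loss flow back through the Jacobian into the classifier.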