Learning to Generate Noise for Multi-Attack Robustness
Authors: Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively validate the robustness and computational efficiency of our proposed method by evaluating it on state-of-the-art attack methods and comparing it against existing state-of-the-art single and multi-perturbation adversarial defense methods on multiple benchmark datasets (CIFAR-10, SVHN, and Tiny-ImageNet). The experimental results show that our method obtains significantly superior performance over all the baseline methods trained with multiple adversarial perturbations, generalizes to diverse perturbations, and substantially reduces the computational cost incurred by training with multiple adversarial perturbations. |
| Researcher Affiliation | Collaboration | 1School of Computing, KAIST, South Korea 2School of Electrical Engineering, KAIST, South Korea 3Graduate School of AI, KAIST, South Korea 4AITRICS, South Korea. |
| Pseudocode | Yes | Algorithm 1 Algorithm for MNG-AC |
| Open Source Code | Yes | We release our code with the pre-trained models for reproducing all the experiments at https://github.com/divyam3897/MNG_AC. |
| Open Datasets | Yes | Datasets. We evaluate on multiple benchmark datasets: 1. CIFAR-10. This dataset (Krizhevsky, 2012)... 2. SVHN. This dataset (Netzer et al., 2011)... 3. Tiny-ImageNet. This dataset is a subset of ImageNet (Russakovsky et al., 2015)... |
| Dataset Splits | Yes | 1. CIFAR-10. This dataset (Krizhevsky, 2012) contains 60,000 images with 5,000 images for training and 1,000 images for test for each class. ... 2. SVHN. This dataset (Netzer et al., 2011) contains 73,257 training and 26,032 testing images of digits and numbers in natural scene images containing ten digit classes. ... 3. Tiny-ImageNet. This dataset is a subset of the ImageNet (Russakovsky et al., 2015) dataset, consisting of 500, 50, and 50 images for the training, validation, and test sets, respectively. |
| Hardware Specification | Yes | By a factor of four on a single machine with four GeForce RTX 2080Ti GPUs on the CIFAR-10 and SVHN datasets using the WideResNet 28-10 (Zagoruyko & Komodakis, 2016) architecture. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. While PyTorch is mentioned in the bibliography, no version is specified for its use in the experimental setup. |
| Experiment Setup | Yes | The total loss function Ltotal for the classifier consists of two terms: the SAT classification loss and an adversarial consistency loss: ... where B is the batch size and β is the hyper-parameter determining the strength of the AC loss, denoted by Lac... More specifically, for a learning rate α... We provide a detailed description of the training and evaluation setup in Appendix A. |
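The quoted setup describes the classifier objective as a classification loss plus a β-weighted adversarial consistency (AC) term, but the table excerpt elides the formulas. The sketch below is a minimal, hypothetical NumPy rendering of that two-term structure, *not* the authors' implementation: it assumes the classification term is a cross-entropy on adversarial examples and the AC term is a Jensen-Shannon-style consistency among the posteriors of clean, adversarial, and noise-augmented samples; the function and variable names are invented for illustration.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class over the batch B
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def js_consistency(p_clean, p_adv, p_aug):
    # Jensen-Shannon divergence among the three posteriors
    # (an assumed form of the AC loss Lac; see the paper for the exact term)
    m = (p_clean + p_adv + p_aug) / 3.0
    def kl(p, q):
        return (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1)
    return ((kl(p_clean, m) + kl(p_adv, m) + kl(p_aug, m)) / 3.0).mean()

def total_loss(logits_adv, logits_clean, logits_aug, labels, beta=1.0):
    # Ltotal = classification loss + beta * Lac (two-term structure from the excerpt)
    p_adv, p_clean, p_aug = map(softmax, (logits_adv, logits_clean, logits_aug))
    return cross_entropy(p_adv, labels) + beta * js_consistency(p_clean, p_adv, p_aug)
```

When the three posteriors agree, the consistency term vanishes and `total_loss` reduces to the plain classification loss; the hyper-parameter β then only matters when the clean, adversarial, and augmented predictions diverge.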