Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Authors: Pau de Jorge Aranda, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Gregory Rogez, Puneet Dokania
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of the previous state-of-the-art, GradAlign, while achieving a 3× speed-up. Code can be found at https://github.com/pdejorge/N-FGSM |
| Researcher Affiliation | Collaboration | Pau de Jorge (University of Oxford; NAVER LABS Europe); Adel Bibi (University of Oxford); Riccardo Volpi (NAVER LABS Europe); Amartya Sanyal (ETH Zürich; ETH AI Center); Philip H. S. Torr (University of Oxford); Grégory Rogez (NAVER LABS Europe); Puneet K. Dokania (University of Oxford; Five AI Ltd.) |
| Pseudocode | Yes | Algorithm 1 (N-FGSM adversarial training), line 1: Inputs: epochs T, batches M, radius ϵ, step-size α (default: ϵ), noise magnitude k (default: 2ϵ). A runnable sketch of this step appears after the table. |
| Open Source Code | Yes | Code can be found at https://github.com/pdejorge/N-FGSM |
| Open Datasets | Yes | We evaluate adversarial robustness on CIFAR-10/100 [16] and SVHN [21] with PGD-50-10 attacks, using both PreActResNet18 [13] and WideResNet28-10 [36]. Experiments on ImageNet: results on the ImageNet dataset [17] are presented in Table 1. A PGD-50-10 evaluation sketch follows the table. |
| Dataset Splits | Yes | We evaluate adversarial robustness on CIFAR-10/100 [16] and SVHN [21] with PGD-50-10 attacks, using both PreActResNet18 [13] and WideResNet28-10 [36]. We train on CIFAR-10/100 for 30 epochs and on SVHN for 15 epochs with a cyclic learning rate. |
| Hardware Specification | Yes | All experiments have been run on a single NVIDIA P100 GPU |
| Software Dependencies | No | The paper mentions software components implicitly through the description of methods (e.g., training neural networks), but it does not specify explicit version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | Algorithm 1 (N-FGSM adversarial training), line 1: Inputs: epochs T, batches M, radius ϵ, step-size α (default: ϵ), noise magnitude k (default: 2ϵ). We train on CIFAR-10/100 for 30 epochs and on SVHN for 15 epochs with a cyclic learning rate (see the schedule sketch after the table). Regarding the noise hyperparameter k, we find that k = 2ϵ works in all but one SVHN experiment (ϵ = 12, in which we set k = 3ϵ). |
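For concreteness, here is a minimal PyTorch sketch of the single-step perturbation described in Algorithm 1. The function name, the [0, 1] image range, and the use of cross-entropy loss are assumptions for illustration; the defaults α = ϵ and k = 2ϵ follow the quoted algorithm inputs.

```python
import torch
import torch.nn.functional as F

def n_fgsm_perturb(model, x, y, eps, alpha=None, k=None):
    """Sketch of an N-FGSM-style single-step perturbation (hypothetical helper).

    Defaults follow the quoted Algorithm 1 inputs: alpha = eps, k = 2 * eps.
    The [0, 1] clamp assumes image inputs normalized to that range.
    """
    alpha = eps if alpha is None else alpha
    k = 2 * eps if k is None else k
    # Sample noise eta uniformly from [-k, k]^d and perturb the input with it.
    eta = torch.empty_like(x).uniform_(-k, k)
    x_noisy = (x + eta).requires_grad_(True)
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]
    # The final perturbation keeps the noise plus the signed-gradient step,
    # with no projection of delta back onto the eps-ball.
    delta = eta + alpha * grad.sign()
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```

In a training loop, the model would then take a standard cross-entropy step on the returned perturbed batch.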
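The robustness numbers above are computed with PGD-50-10 (50 iterations, 10 random restarts). The following is a sketch of such an evaluation attack under common conventions; the step size (here ϵ/4) and the per-example restart bookkeeping are assumptions, not details quoted from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_50_10(model, x, y, eps, alpha=None, steps=50, restarts=10):
    """Hypothetical PGD-50-10 evaluation attack (l-inf ball of radius eps)."""
    alpha = eps / 4 if alpha is None else alpha  # assumed step size
    adv = x.clone()
    not_fooled = torch.ones(x.size(0), dtype=torch.bool, device=x.device)
    for _ in range(restarts):
        # Each restart begins from a fresh random point inside the eps-ball.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
            grad = torch.autograd.grad(loss, delta)[0]
            # Ascend the loss, then project back onto the eps-ball.
            delta = torch.clamp(delta.detach() + alpha * grad.sign(), -eps, eps)
        cand = torch.clamp(x + delta, 0, 1).detach()
        with torch.no_grad():
            fooled = model(cand).argmax(dim=1) != y
        # Record the first successful restart for each example.
        new = fooled & not_fooled
        adv[new] = cand[new]
        not_fooled &= ~fooled
    return adv
```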
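The setup also quotes a cyclic learning rate over 30 epochs (CIFAR-10/100) or 15 epochs (SVHN). One hypothetical way to reproduce such a schedule is with PyTorch's OneCycleLR; the peak learning rate, the SGD hyperparameters, and the toy stand-in model are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

# Toy model standing in for PreActResNet18 on 32x32 RGB inputs (assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.2, momentum=0.9, weight_decay=5e-4)

epochs, batches_per_epoch = 30, 391  # 30 epochs on CIFAR-10/100; SVHN uses 15
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=0.2, total_steps=epochs * batches_per_epoch
)

for _ in range(epochs * batches_per_epoch):
    opt.step()    # one optimizer step per batch ...
    sched.step()  # ... followed by one scheduler step (cyclic ramp up, then down)
```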