Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
Authors: Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared with AutoAttack (AA), our method achieves comparable performance while costing only 3% of the computational time in extensive experiments. The reliability of our method lies in evaluating the quality of adversarial examples using the margin between two targets, which precisely identifies the most adversarial example. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (2) School of Mathematics and Statistics, The University of Melbourne; (3) RIKEN-AIP; (4) Department of Computer Science, Hong Kong Baptist University. |
| Pseudocode | Yes | Algorithm 1: MM Attack; Algorithm 2: Adversarial Training with MM Attack (a minimal sketch of the minimum-margin criterion follows the table). |
| Open Source Code | Yes | The code of our MM attack is available at github.com/Sjtubrian/MM-attack. |
| Open Datasets | Yes | We conducted experiments on CIFAR-10, SVHN and CIFAR-100. ... The CIFAR-10, SVHN and CIFAR-100 datasets can be downloaded via PyTorch (a loading sketch follows the table). |
| Dataset Splits | No | The paper mentions using "the performance of the best checkpoint model (results at epoch 60)", which implies a model-selection step, but it does not explicitly specify a validation split (e.g., percentages or counts) or refer to a standard validation split for the datasets used. |
| Hardware Specification | Yes | We implement all methods on Python 3.7 (PyTorch 1.7.1) with an NVIDIA GeForce RTX 3090 GPU and an AMD Ryzen Threadripper 3960X 24-core processor. |
| Software Dependencies | Yes | We implement all methods on Python 3.7 (PyTorch 1.7.1) with an NVIDIA GeForce RTX 3090 GPU and an AMD Ryzen Threadripper 3960X 24-core processor. |
| Experiment Setup | Yes | The training setup follows previous works (Madry et al., 2018; Zhang et al., 2019): all networks are trained for 100 epochs using SGD with 0.9 momentum. The initial learning rate is 0.1 (0.01 for SVHN) and is divided by 10 at epochs 60 and 90, respectively. The weight decay is 0.0002 (0.0035 for SVHN). (An optimizer/schedule sketch follows the table.) |
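
The Pseudocode row above refers to Algorithm 1 (MM Attack). Below is a minimal PyTorch sketch of the minimum-margin idea it is built on: run a targeted PGD against a few candidate target classes and keep, per example, the perturbation that attains the smallest logit margin. The candidate-target selection, step size, radius, and iteration count here are illustrative assumptions rather than the paper's settings; the authors' reference implementation is at github.com/Sjtubrian/MM-attack.

```python
import torch

def margin(logits, y, t):
    # Logit margin between the true class y and a target class t;
    # a smaller (more negative) margin means the example is closer
    # to being misclassified as t.
    return logits.gather(1, y[:, None]).squeeze(1) - logits.gather(1, t[:, None]).squeeze(1)

def mm_attack_sketch(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20, k=3):
    """L-inf targeted PGD over the k most promising target classes,
    keeping, per example, the perturbation with the minimum margin.
    All hyperparameters here are placeholders."""
    with torch.no_grad():
        clean_logits = model(x)
    # Candidate targets: the k highest-scoring non-true classes.
    masked = clean_logits.clone()
    masked.scatter_(1, y[:, None], float('-inf'))
    targets = masked.topk(k, dim=1).indices

    best_x = x.clone().detach()
    best_m = torch.full((x.size(0),), float('inf'), device=x.device)
    for i in range(k):
        t = targets[:, i]
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            m = margin(model(x_adv), y, t)
            grad = torch.autograd.grad(m.sum(), x_adv)[0]
            # Descending on the margin pushes x_adv toward class t.
            x_adv = (x_adv - alpha * grad.sign()).detach()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
        with torch.no_grad():
            m = margin(model(x_adv), y, t)
        improved = m < best_m
        best_m = torch.where(improved, m, best_m)
        best_x[improved] = x_adv[improved]
    return best_x
```

Given a trained classifier `model` in eval mode and inputs `x` in [0, 1], `mm_attack_sketch(model, x, y)` returns one perturbed batch; an example counts as successfully attacked when its final margin is negative.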
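
All three datasets named in the Open Datasets row ship with torchvision, so the "downloaded via PyTorch" remark maps directly onto its dataset classes. A minimal loading sketch, with the root path and transform as placeholders:

```python
from torchvision import datasets, transforms

# CIFAR-10, CIFAR-100 and SVHN are downloaded on first use.
to_tensor = transforms.ToTensor()
cifar10 = datasets.CIFAR10(root='./data', train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root='./data', train=True, download=True, transform=to_tensor)
svhn = datasets.SVHN(root='./data', split='train', download=True, transform=to_tensor)
```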
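
The Experiment Setup row's optimizer and learning-rate schedule translate directly into standard PyTorch components. The sketch below uses the quoted CIFAR-10/100 values; the model and the per-epoch training loop are placeholders (SVHN would use lr=0.01 and weight decay 0.0035 instead).

```python
import torch
import torch.nn as nn

# Stand-in model; not the architecture used in the paper's experiments.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# SGD with 0.9 momentum, initial lr 0.1, weight decay 2e-4,
# lr divided by 10 at epochs 60 and 90, 100 epochs in total.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 90], gamma=0.1)

for epoch in range(100):
    # ... one epoch of (adversarial) training with `optimizer` goes here ...
    scheduler.step()
```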