Improving Adversarial Robustness Requires Revisiting Misclassified Examples
Authors: Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that MART and its variant could significantly improve the state-of-the-art adversarial robustness. |
| Researcher Affiliation | Collaboration | (1) Shanghai Jiao Tong University, (2) University of California, Los Angeles, (3) JD.com, (4) The University of Melbourne |
| Pseudocode | Yes | Algorithm 1: Misclassification Aware adveRsarial Training (MART) |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky & Hinton, 2009) |
| Dataset Splits | No | The paper refers to |
| Hardware Specification | No | Part of the experiments were done on the JD AI Platform NeuHub |
| Software Dependencies | No | The paper mentions software components like |
| Experiment Setup | Yes | All the models are trained using SGD with momentum 0.9, weight decay 2×10⁻⁴, and an initial learning rate of 0.1, which is divided by 10 at the 75th and 90th epoch. All natural images are normalized into [0, 1], and simple data augmentations are applied, including 4-pixel padding with 32×32 random crop and random horizontal flip. The maximum perturbation is ϵ = 8/255 and the parameter λ = 6. The training attack is PGD-10 with random start and step size ϵ/4, while the test attack is PGD-20 with random start and step size ϵ/10. |
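
The experiment setup quoted above maps onto a standard adversarially trained classifier. Below is a minimal sketch of that configuration, assuming PyTorch; `model` and `train_loader` are hypothetical placeholders, and for brevity the outer loss is plain cross-entropy on adversarial examples, whereas the paper's MART objective (Algorithm 1) replaces it with a misclassification-aware loss weighted by λ = 6.

```python
# Minimal sketch of the reported training configuration (assumes PyTorch).
# `model` and `train_loader` are illustrative placeholders, not from the paper.
import torch
import torch.nn.functional as F

EPS = 8 / 255                 # maximum perturbation epsilon
TRAIN_STEPS = 10              # training attack: PGD-10
TRAIN_STEP_SIZE = EPS / 4     # training attack step size
TEST_STEPS = 20               # test attack: PGD-20
TEST_STEP_SIZE = EPS / 10     # test attack step size
LAMBDA = 6                    # weight of MART's regularization term (loss not shown here)


def pgd_attack(model, x, y, eps, steps, step_size):
    """PGD with random start on images normalized to [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def train(model, train_loader, epochs=100, device="cuda"):
    # SGD with momentum 0.9, weight decay 2e-4, initial LR 0.1,
    # divided by 10 at the 75th and 90th epoch.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=2e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[75, 90], gamma=0.1)
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, EPS, TRAIN_STEPS, TRAIN_STEP_SIZE)
            optimizer.zero_grad()
            # Plain adversarial cross-entropy shown for brevity; MART replaces
            # this with its misclassification-aware loss (Algorithm 1 in the paper).
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
```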