Can Adversarial Training Be Manipulated By Non-Robust Features?
Authors: Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, comprehensive experiments demonstrate that stability attacks are harmful on benchmark datasets, and thus the adaptive defense is necessary to maintain robustness. |
| Researcher Affiliation | Collaboration | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) Chongqing University, Chongqing, China; (3) RIKEN Center for Advanced Intelligence Project, Japan; (4) Nanyang Technological University, Singapore; (5) JD AI Research, Beijing, China; (6) MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/TLMichael/Hypocritical-Perturbation. |
| Open Datasets | Yes | We evaluate our methods on four benchmark datasets: CIFAR-10 [33], SVHN [43], CIFAR-100 [33], and Tiny-ImageNet [34]. |
| Dataset Splits | No | The paper mentions using benchmark datasets but does not provide specific dataset split information (exact percentages, sample counts, or explicit citations to predefined splits) needed to reproduce the data partitioning. |
| Hardware Specification | Yes | All experiments are run on a single NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | More details on experimental settings are provided in Appendix D. [...] For CIFAR-10, SVHN, and CIFAR-100, all models are trained for 100 epochs, with batch size 128. The learning rate is initialized as 0.1, and divided by 10 at 50 and 75 epochs. We use SGD with momentum 0.9 and weight decay 5e-4. (See the sketch of this setup after the table.) |
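
For readers reconstructing the reported setup, below is a minimal sketch assuming PyTorch. Only the optimizer (SGD, momentum 0.9, weight decay 5e-4), the learning-rate schedule (0.1, divided by 10 at epochs 50 and 75), the batch size (128), and the epoch count (100) follow the quoted text; the ResNet-18 backbone, the CIFAR-10 transform, and the plain training loop are illustrative placeholders, not taken from the paper's released code, which additionally performs adversarial training.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Hyperparameters quoted from the paper (CIFAR-10/SVHN/CIFAR-100 settings).
EPOCHS = 100
BATCH_SIZE = 128

# CIFAR-10 loader; the transform is a common default, not specified in the quote.
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)

model = torchvision.models.resnet18(num_classes=10)  # placeholder backbone
criterion = nn.CrossEntropyLoss()

# SGD with momentum 0.9 and weight decay 5e-4; lr starts at 0.1 and is
# divided by 10 at epochs 50 and 75, matching the reported schedule.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 75], gamma=0.1)

for epoch in range(EPOCHS):
    for images, labels in train_loader:
        # The paper's adversarial-example generation step would go here;
        # this sketch trains on clean images only.
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # per-epoch step, so the milestones are in epochs
```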