Towards Understanding the Regularization of Adversarial Robustness on Neural Networks
Authors: Yuxin Wen, Shuai Li, Kui Jia
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We identify quantities from generalization analysis of NNs; with the identified quantities we empirically find that AR is achieved by regularizing/biasing NNs towards less confident solutions... Overall, the core contribution in this work is to show that adversarial robustness (AR) regularizes NNs in a way that hurts its capacity to learn to perform in test. More specifically: ... Our empirical analysis tells that AR effectively regularizes NNs to reduce the GE gaps. ...Empirical studies on regularization of adversarial robustness (Section 4 title). |
| Researcher Affiliation | Academia | 1School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong 510640, China 2Pazhou Lab, Guangzhou, 510335, China. Correspondence to: Shuai Li <lishuai918@gmail.com>, Yuxin Wen <wen.yuxin@mail.scut.edu.cn>, Kui Jia <kuijia@scut.edu.cn>. |
| Pseudocode | No | No explicitly labeled pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | Our experiments are conducted on CIFAR10, CIFAR100, and Tiny-ImageNet (ImageNet, 2018) that represent learning tasks of increased difficulties. Tiny imagenet, 2018. URL https://tiny-imagenet.herokuapp.com/. |
| Dataset Splits | No | The paper mentions using training and test sets (e.g., 'test set of CIFAR10', 'training losses', 'test losses') but does not provide specific details on the dataset splits (e.g., exact percentages or sample counts for training, validation, and test sets). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., libraries, frameworks, or programming languages with their specific versions) used for the experiments. |
| Experiment Setup | No | The paper mentions using adversarial training with increasing AR strength and specific network architectures (ResNet, Wide ResNet), and refers to Appendix B.1 for details of the adversarial training technique, but does not provide concrete hyperparameters for the main network training, such as learning rate, batch size, number of epochs, or optimizer settings. |