Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
Authors: Chengyu Dong, Liyuan Liu, Jingbo Shang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on different datasets, training methods, neural architectures and robustness evaluation metrics verify the effectiveness of our method. |
| Researcher Affiliation | Collaboration | Chengyu Dong University of California, San Diego cdong@eng.ucsd.edu Liyuan Liu Microsoft Research lucliu@microsoft.com Jingbo Shang University of California, San Diego jshang@eng.ucsd.edu |
| Pseudocode | No | The paper describes its methods in text and mathematical formulations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on three datasets including CIFAR-10, CIFAR-100 (Krizhevsky, 2009) and Tiny-ImageNet (Le & Yang, 2015). |
| Dataset Splits | No | The paper mentions a 'validation set' and a 'training subset of size 5k' but does not specify split percentages or sample counts for the train/validation/test partitions across experiments, so the data partitioning cannot be fully reproduced. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions various training methods and models (e.g., PGD training, ResNet-18, AutoAttack) but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | We conduct PGD training on pre-activation ResNet-18 (He et al., 2016) with 10 iterations and perturbation radius 8/255 by default (a minimal sketch of this setup follows the table). |
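The quoted setup corresponds to standard L-infinity PGD adversarial training with 10 attack iterations and perturbation radius 8/255. The sketch below is a minimal illustration under assumed PyTorch conventions, not the authors' released code; the model, optimizer, data batch, and step size (`alpha`, commonly 2/255) are placeholder assumptions not specified in the table.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD attack; alpha (step size) is an assumed value, not given in the paper quote."""
    # Random start inside the eps-ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                               # keep valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One PGD adversarial training step: craft adversarial examples, then train on them."""
    model.eval()   # one common choice: freeze batch-norm statistics while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```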