Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training

Authors: Chen Liu, Zhichao Huang, Mathieu Salzmann, Tong Zhang, Sabine Süsstrunk

JMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically study how easy and hard instances impact the performance of adversarial training... We conduct theoretical analyses on both linear and nonlinear models. ... Our empirical and theoretical analyses indicate that avoiding fitting the hard training instances can mitigate adversarial overfitting.
Researcher Affiliation | Academia | 1. Department of Computer Science, City University of Hong Kong... 2. Department of Mathematics, Hong Kong University of Science and Technology... 3. School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne... 4. Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign
Pseudocode | Yes | Algorithm 1: One epoch of the accelerated adversarial training.
Open Source Code | Yes | The code to reproduce the results of this paper is publicly available on Github1. 1. https://github.com/IVRL/Robust Overfit-Hard Instance.git
Open Datasets | Yes | Figure 1: Some examples of the easiest and the hardest instances in CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011) datasets... 6. Data available for download on https://www.cs.toronto.edu/~kriz/cifar.html. MIT license. Free to use. 7. Data available for download on http://ufldl.stanford.edu/housenumbers/. Free for non-commercial use.
Dataset Splits | No | The paper uses the CIFAR10 and SVHN datasets, which have standard splits, and details how subsets are constructed from the training data (e.g., the 10000 easiest, random, or hardest instances; 10 non-overlapping groups {G_i}_{i=0}^{9}; mini-batches in which half of the instances are sampled from the original training set and the other half from the additional data). However, it does not explicitly state train/test/validation split percentages or exact counts for the main datasets used in the general experiments.
Hardware Specification | Yes | We run the experiments on a machine with 4 NVIDIA TITAN XP GPUs.
Software Dependencies | No | The paper mentions software tools such as AutoAttack and algorithms such as stochastic gradient descent (SGD), but it does not specify version numbers for any libraries, programming languages, or other software components used in the implementation.
Experiment Setup | Yes | Unless otherwise mentioned, we use the general experimental settings in Appendix C.1. For PGD adversarial training, the step size is 2/255 for CIFAR10 and 0.005 for SVHN; PGD is run for 10 iterations for both datasets. ... the momentum factor is 0.9, the learning rate starts at 0.1 and is divided by 10 at 1/2 and 3/4 of the whole training duration. The size of the mini-batch is always 128.
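The PGD settings quoted in the last row can be illustrated with a minimal sketch of the inner maximization step of PGD adversarial training. This is a pure-Python illustration on a linear model with a squared loss, not the authors' implementation; the CIFAR10 step size 2/255 and the 10 iterations match the table, while the perturbation radius eps = 8/255 is an assumed value (standard for CIFAR10 but not quoted above).

```python
# Hedged sketch: L-inf PGD attack on a linear model, i.e. the inner
# maximization of PGD adversarial training. Step size 2/255 and 10
# iterations follow the table; eps = 8/255 is an assumption.

def pgd_attack(x, y, w, eps=8/255, step=2/255, iters=10):
    """Maximize the squared loss (w . x_adv - y)^2 over the eps-ball around x."""
    x_adv = list(x)  # start from the clean input
    for _ in range(iters):
        # Gradient of the loss w.r.t. the input: 2 * (w.x - y) * w
        pred = sum(wi * xi for wi, xi in zip(w, x_adv))
        grad = [2 * (pred - y) * wi for wi in w]
        # Ascend along the sign of the gradient (signed-gradient step) ...
        x_adv = [xi + step * (1 if g > 0 else -1 if g < 0 else 0)
                 for xi, g in zip(x_adv, grad)]
        # ... then project back into the L-inf eps-ball around the clean input.
        x_adv = [min(max(xa, xc - eps), xc + eps)
                 for xa, xc in zip(x_adv, x)]
    return x_adv
```

In the paper's actual setup this attack would be run on a network's loss for each mini-batch of 128 images before the SGD update (momentum 0.9, learning rate 0.1 decayed by 10x at 1/2 and 3/4 of training); the linear model here only keeps the sketch self-contained.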