Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

Authors: Runqi Lin, Chaojian Yu, Tongliang Liu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our method can effectively eliminate CO and further boost adversarial robustness with negligible additional computational overhead. In this section, we provide a comprehensive evaluation to verify the effectiveness of AAER, including experiment settings (Section 4.1), performance evaluation (Section 4.2), ablation studies (Section 4.3) and time complexity study (Section 4.4)."
Researcher Affiliation | Academia | "Runqi Lin, Chaojian Yu, Tongliang Liu; Sydney AI Centre, The University of Sydney; {rlin0511, chyu8051, tongliang.liu}@sydney.edu.au"
Pseudocode | Yes | "Algorithm 1 Abnormal Adversarial Examples Regularization (AAER)" (a hedged sketch of one such training step follows the table)
Open Source Code | Yes | "Our implementation can be found at https://github.com/tmllab/2023_NeurIPS_AAER."
Open Datasets | Yes | "We evaluate our method on several benchmark datasets, including CIFAR-10/100 [22], SVHN [28], Tiny-ImageNet [28] and ImageNet-100 [7]."
Dataset Splits | No | No explicit statement of training/validation/test split percentages or sample counts, nor a citation to a specific split methodology, was found.
Hardware Specification | Yes | "Table 4. CIFAR-10 training time on a single NVIDIA RTX 4090 GPU using PreAct ResNet-18."
Software Dependencies | No | No explicit listing of software dependencies with specific version numbers (e.g., Python 3.8, PyTorch 1.9) was found.
Experiment Setup | Yes | "In this work, we use the SGD optimizer with a momentum of 0.9, weight decay of 5 Ɨ 10⁻⁓ and L∞ as the threat model. For the learning rate schedule, we use the cyclical learning rate schedule [32] with 30 epochs, which reaches its maximum learning rate (0.2) when half of the epochs (15) have passed. The hyperparameter settings for CIFAR-10/100 are summarized in Table 1." (a schedule sketch follows the table)
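
As a reading aid for the Pseudocode row above, here is a minimal sketch of one single-step adversarial training step with an AAER-style regularizer. It assumes, following the paper's framing, that abnormal adversarial examples (AAEs) are examples whose loss decreases after the adversarial perturbation; the surrogate regularization terms and the weights LAMBDA_DEC and LAMBDA_VAR are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

import torch
import torch.nn.functional as F

EPS = 8 / 255        # common L-inf budget on CIFAR-10; an assumption here
LAMBDA_DEC = 1.0     # assumed weight on the abnormal-loss-decrease term
LAMBDA_VAR = 1.0     # assumed weight on the abnormal output-variation term

def aaer_style_step(model, x, y, optimizer):
    # Single-step (FGSM) adversarial example generation.
    x = x.clone().detach().requires_grad_(True)
    clean_logits = model(x)
    clean_loss = F.cross_entropy(clean_logits, y, reduction="none")
    grad = torch.autograd.grad(clean_loss.sum(), x)[0]
    x_adv = (x + EPS * grad.sign()).clamp(0, 1).detach()

    adv_logits = model(x_adv)
    adv_loss = F.cross_entropy(adv_logits, y, reduction="none")

    # "Abnormal" adversarial examples: the attack *decreased* the loss.
    abnormal = adv_loss < clean_loss.detach()

    # Differentiable surrogate for discouraging AAEs: penalize the abnormal
    # loss decrease (positive exactly on AAEs) and, on AAEs, the output
    # variation between clean and adversarial logits. Both terms stand in
    # for the paper's regularizer and are assumptions of this sketch.
    loss = adv_loss.mean()
    loss = loss + LAMBDA_DEC * F.relu(clean_loss.detach() - adv_loss).mean()
    if abnormal.any():
        var = (adv_logits[abnormal] - clean_logits.detach()[abnormal]).pow(2).mean()
        loss = loss + LAMBDA_VAR * var

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()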
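
The optimizer and schedule quoted in the Experiment Setup row are straightforward to reproduce in PyTorch. The sketch below wires up SGD (momentum 0.9, weight decay 5e-4) with a triangular cyclical schedule that rises to the maximum learning rate 0.2 over the first 15 of 30 epochs and decays back afterwards; the steps-per-epoch value is an assumption, and torchvision's resnet18 is a placeholder for the paper's PreAct ResNet-18.

import torch
from torchvision.models import resnet18  # placeholder for PreAct ResNet-18

model = resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.2,
                            momentum=0.9, weight_decay=5e-4)

EPOCHS = 30
STEPS_PER_EPOCH = 391  # assumed: CIFAR-10, 50,000 images / batch size 128
total_steps = EPOCHS * STEPS_PER_EPOCH

# Triangular cycle: lr goes 0 -> 0.2 over epochs 1-15, then 0.2 -> 0 over
# epochs 16-30, stepped once per batch. cycle_momentum=False keeps the SGD
# momentum fixed at 0.9 as quoted in the paper.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.0, max_lr=0.2,
    step_size_up=total_steps // 2, step_size_down=total_steps // 2,
    cycle_momentum=False)

for step in range(total_steps):
    # ... one training batch: forward, loss, backward, optimizer.step() ...
    scheduler.step()  # advance the cyclical schedule after each batch

The remaining hyperparameters (perturbation budget, regularization weights, and so on) for CIFAR-10/100 are the ones summarized in the paper's Table 1.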