Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Modeling Adversarial Noise for Adversarial Training
Authors: Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
ICML 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experiments In this section, we first introduce the experiment setup in Section 4.1. Then, we evaluate the effectiveness of our defense method against representative and commonly used L∞-norm and L2-norm adversarial attacks in Section 4.2. In addition, we conduct ablation studies in Section 4.3. |
| Researcher Affiliation | Academia | 1ISN Lab, School of Telecommunications Engineering, Xidian University, EMAIL, EMAIL 2TML Lab, Sydney AI Centre, The University of Sydney 3Department of Computer Science, Hong Kong Baptist University, EMAIL. |
| Pseudocode | Yes | Algorithm 1 Training the defense model based on Modeling Adversarial Noise (MAN). |
| Open Source Code | Yes | The code is available at https://github.com/dwDavidxd/MAN. |
| Open Datasets | Yes | Datasets. We verify the effectiveness of our defense method on two popular benchmark datasets, i.e., CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Wu et al., 2017). |
| Dataset Splits | Yes | CIFAR-10 has 10 classes of images including 50,000 training images and 10,000 test images. Tiny-ImageNet has 200 classes of images including 100,000 training images, 10,000 validation images and 10,000 test images. |
| Hardware Specification | No | No specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments are mentioned. The text only refers to model architectures like ResNet-18 and VggNet-19. |
| Software Dependencies | No | The paper mentions training models using SGD, but does not provide specific software names with version numbers for libraries, frameworks, or languages used for implementation. |
| Experiment Setup | Yes | For all baselines and our defense method, we use the L∞-norm non-target PGD-10 (i.e., PGD with iteration number of 10) with random start and step size ϵ/4 to craft adversarial training data. The perturbation budget ϵ is set to 8/255 for both CIFAR-10 and Tiny-ImageNet. All the defense models are trained using SGD with momentum 0.9 and an initial learning rate of 0.1. The weight decay is 2 × 10⁻⁴ for CIFAR-10, and is 5 × 10⁻⁴ for Tiny-ImageNet. The batch-size is set as 1024 to reduce time cost. The epoch number is set to 100. The learning rate is divided by 10 at the 75-th and 90-th epoch. |
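The PGD-10 crafting step quoted in the experiment setup (L∞ ball of radius ϵ = 8/255, random start, step size ϵ/4, 10 iterations) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code; `model` stands in for any classifier, and the loss choice (cross-entropy) is an assumption consistent with standard non-targeted PGD.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, steps=10, random_start=True):
    """Craft L-inf PGD adversarial examples with step size eps/4,
    projecting back into the eps-ball around x after every step."""
    alpha = eps / 4
    x_adv = x.clone().detach()
    if random_start:
        # Random start: uniform perturbation inside the eps-ball.
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss along the gradient sign, then project.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

In an adversarial-training loop the returned `x_adv` would replace (or augment) the clean batch before the SGD update described above.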