Robust Local Features for Improving the Generalization of Adversarial Training

Authors: Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, John E. Hopcroft

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training.
Researcher Affiliation | Academia | Chuanbiao Song, Kun He, Jiadong Lin: School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China ({cbsong,brooklet60,jdlin}@hust.edu.cn). Liwei Wang: School of Electronics Engineering and Computer Sciences, Peking University, Peking, China (wanglw@cis.pku.edu.cn). John E. Hopcroft: Department of Computer Science, Cornell University, NY 14853, USA (jeh@cs.cornell.edu).
Pseudocode | Yes | Algorithm 1: Robust Local Features for Adversarial Training (RLFAT). (A hedged sketch of such a training step appears after the table.)
Open Source Code | Yes | Codes are available online: https://github.com/JHL-HUST/RLFAT
Open Datasets | Yes | Datasets. We compare the proposed methods with the baselines on widely used benchmark datasets, namely CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). Since adversarially robust generalization becomes increasingly hard for high dimensional data and little training data (Schmidt et al., 2018), we also consider one challenging dataset: STL-10 (Coates et al., 2011), which contains 5,000 training images with 96 × 96 pixels per image.
Dataset Splits | No | The paper mentions training data (e.g., 5,000 training images for STL-10) but does not explicitly state validation splits or their sizes.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not give version numbers for any software libraries, frameworks, or programming languages.
Experiment Setup | Yes | For all training jobs, we use the Adam optimizer with a learning rate of 0.001 and a batch size of 32. For CIFAR-10 and CIFAR-100, we run 79,800 training steps; for STL-10, we run 29,700 training steps. For STL-10 and CIFAR-100, the adversarial examples are generated with step size 0.0075, 7 iterations, and ϵ = 0.03. For CIFAR-10, the adversarial examples are generated with step size 0.0075, 10 iterations, and ϵ = 0.03. (A hedged sketch of these attack settings appears after the table.)
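
The experiment setup above describes iterated L∞ attacks with step size 0.0075, 7 or 10 iterations, and ϵ = 0.03. The sketch below shows what such a PGD-style attack could look like in PyTorch; it is a minimal illustration assuming pixel values in [0, 1] and a random start, and the function name pgd_attack is ours, not taken from the authors' repository.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, step_size=0.0075, num_steps=10):
    """L_inf PGD-style attack using the hyperparameters quoted in the table
    (eps = 0.03, step size = 0.0075; 10 steps for CIFAR-10, 7 for STL-10/CIFAR-100).
    Assumes inputs scaled to [0, 1]; illustrative sketch, not the authors' code."""
    # Random start inside the eps-ball, projected back into the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixels
    return x_adv.detach()
```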
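
The table also notes that the paper provides Algorithm 1 (RLFAT) as pseudocode but does not reproduce it here. As a rough orientation only, the sketch below combines a random block shuffle of adversarial examples with a cross-entropy term on the shuffled examples and a feature-transfer term between shuffled and unshuffled adversarial examples; the exact shuffling scheme, the `features` callable, and the weight `eta` are assumptions and may differ from the authors' Algorithm 1.

```python
def random_block_shuffle(x, k=4):
    """Split each image into a k x k grid of blocks and randomly permute the blocks.
    Simplified stand-in for a block-shuffle transform; the paper's exact scheme may differ."""
    n, c, h, w = x.shape
    bh, bw = h // k, w // k
    blocks = x.view(n, c, k, bh, k, bw).permute(0, 1, 2, 4, 3, 5).reshape(n, c, k * k, bh, bw)
    blocks = blocks[:, :, torch.randperm(k * k, device=x.device)]  # shuffle block order
    return blocks.reshape(n, c, k, k, bh, bw).permute(0, 1, 2, 4, 3, 5).reshape(n, c, h, w)

def rlfat_step(model, features, x, y, optimizer, eta=1.0, k=4):
    """One hypothetical RLFAT-style update (illustrative only).
    `features` is assumed to return an intermediate representation of the model;
    `eta` is an assumed weight between the two terms."""
    x_adv = pgd_attack(model, x, y)                       # attack sketch from above
    x_rbs = random_block_shuffle(x_adv, k=k)
    loss_local = F.cross_entropy(model(x_rbs), y)         # learn from block-shuffled adversarial examples
    loss_transfer = F.mse_loss(features(x_rbs), features(x_adv))  # align representations
    loss = loss_local + eta * loss_transfer
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```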