Taxonomy Driven Fast Adversarial Training

Authors: Kun Tong, Chengze Jiang, Jie Gui, Yuan Cao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Analysis and comparison experiments are performed and evaluated on the CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), Tiny ImageNet (Le and Yang 2015), and ImageNet-100 (Deng et al. 2009) datasets, which are standard datasets for AT. We adopt ResNet-18 as the backbone to perform all experiments.
Researcher Affiliation | Academia | Kun Tong (1), Chengze Jiang (1), Jie Gui* (1,2,3), Yuan Cao (4). (1) Southeast University, Nanjing, China; (2) Engineering Research Center of Blockchain Application, Supervision And Management (Southeast University), Ministry of Education, China; (3) Purple Mountain Laboratories, China; (4) Ocean University of China, China
Pseudocode | Yes | The implementation of our proposed method is outlined in Algorithm 1 and presented in detail as follows.
Open Source Code | Yes | Code is available at https://github.com/bookman233/TDAT.
Open Datasets | Yes | Analysis and comparison experiments are performed and evaluated on the CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), Tiny ImageNet (Le and Yang 2015), and ImageNet-100 (Deng et al. 2009) datasets, which are standard datasets for AT.
Dataset Splits | No | The paper mentions using training and test datasets but does not explicitly provide the split percentages or methodology for training, validation, and test sets. It implies the standard splits of the public datasets used but does not state them.
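For context, the canonical splits of these public datasets are well known, even though the paper does not restate them. The sketch below records those standard splits as plain data; it is our own reference table, not something taken from the paper, and ImageNet-100 is omitted because its exact image counts depend on which 100-class subset the authors chose.

```python
# Standard train/test splits of the public datasets named in the paper.
# This is background reference data, not the authors' configuration.
STANDARD_SPLITS = {
    "CIFAR-10":      {"classes": 10,  "train": 50_000,  "test": 10_000},
    "CIFAR-100":     {"classes": 100, "train": 50_000,  "test": 10_000},
    "Tiny-ImageNet": {"classes": 200, "train": 100_000, "test": 10_000},
    # "ImageNet-100": 100-class subset of ImageNet-1k; counts depend on
    # the chosen subset, so they are deliberately left unspecified.
}

def train_fraction(name: str) -> float:
    """Fraction of the dataset's canonical split used for training."""
    s = STANDARD_SPLITS[name]
    return s["train"] / (s["train"] + s["test"])

print(train_fraction("CIFAR-10"))  # 50000 / 60000
```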
Hardware Specification | No | The paper thanks a computing center for facility support but does not specify any particular hardware components, such as CPU models, GPU models, or memory sizes, used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, or library versions such as PyTorch or TensorFlow).
Experiment Setup | Yes | For CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-100, the relaxation factor γmin is set to 0.15, 0.05, 0.025, and 0.05, respectively; these values depend on the number of classes in the corresponding dataset. The momentum factor is set to 0.75 for all experiments. For single-step attack methods, the step size is set to 8/255, while it is set to 2/255 for multi-step attack methods.
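The quoted hyperparameters can be collected into a small configuration sketch. The values below are the ones reported in the paper; the dataset keys, helper functions, and the FGSM-style perturbation illustration are our own assumptions for demonstration and do not come from the authors' released code.

```python
import numpy as np

# Hyperparameters as reported in the paper (scaffolding is illustrative).
GAMMA_MIN = {               # relaxation factor γmin, per dataset
    "CIFAR-10": 0.15,
    "CIFAR-100": 0.05,
    "Tiny-ImageNet": 0.025,
    "ImageNet-100": 0.05,
}
MOMENTUM_FACTOR = 0.75      # used for all experiments

def attack_step_size(multi_step: bool) -> float:
    """Step size in [0, 1] pixel scale: 8/255 for single-step attacks,
    2/255 for multi-step attacks, per the reported setup."""
    return 2 / 255 if multi_step else 8 / 255

def single_step_perturb(x: np.ndarray, grad: np.ndarray) -> np.ndarray:
    """Generic single-step (FGSM-style) perturbation, shown only to
    illustrate how the step size is applied: move each pixel by the step
    size in the sign of the loss gradient, then clip to the valid range."""
    return np.clip(x + attack_step_size(multi_step=False) * np.sign(grad),
                   0.0, 1.0)
```

For example, a zero image perturbed against an all-positive gradient moves every pixel up by exactly 8/255.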