Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Improving the Generalization of Adversarial Training with Domain Adaptation

Authors: Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets.
Researcher Affiliation | Academia | Chuanbiao Song, Department of Computer Science, Huazhong University of Science and Technology, Wuhan 430074, China; Kun He, Department of Computer Science, Huazhong University of Science and Technology, Wuhan 430074, China; Liwei Wang, Department of Machine Intelligence, Peking University; John E. Hopcroft, Department of Computer Science, Cornell University, Ithaca, NY 14850, USA
Pseudocode | Yes | Algorithm 1: Adversarial training with domain adaptation on network f(x): ℝ^d → ℝ^k.
Open Source Code | Yes | Code for these experiments is available at https://github.com/JHL-HUST/ATDA.
Open Datasets | Yes | We consider four popular datasets, namely Fashion-MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009).
Dataset Splits | No | The paper mentions training and testing data but does not specify explicit train/validation/test splits, percentages, or absolute counts for dataset partitioning.
Hardware Specification | Yes | All experiments are implemented on a single Titan X GPU.
Software Dependencies | No | The paper mentions optimizers (Adam) and components like ELU and Group Normalization, but it does not specify software names with version numbers (e.g., Python, TensorFlow, PyTorch versions) for reproducibility.
Experiment Setup | Yes | For all experiments, we set the hyper-parameter λ in Eq. (12) to 1/3 and the hyper-parameter α in Eq. (10) to 0.1. ... In the training phase, we use the Adam optimizer with a learning rate of 0.001 and set the batch size to 64. ... For all adversarial training methods, the magnitude of perturbations is 0.1 in ℓ∞ norm.
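Algorithm 1 combines adversarial training with a domain-adaptation term that aligns clean and adversarial feature distributions. As an illustrative sketch only (not the paper's exact loss; the choice of a CORAL-style covariance penalty, the feature shapes, and the normalization here are assumptions), such an alignment term can be written in NumPy as:

```python
import numpy as np

def coral_loss(feats_clean, feats_adv):
    """CORAL-style alignment penalty: squared Frobenius distance between
    the covariance matrices of a clean and an adversarial feature batch.
    feats_*: arrays of shape (batch, d)."""
    d = feats_clean.shape[1]

    def cov(f):
        # Center each feature dimension, then form the sample covariance.
        f_centered = f - f.mean(axis=0, keepdims=True)
        return f_centered.T @ f_centered / (f.shape[0] - 1)

    diff = cov(feats_clean) - cov(feats_adv)
    return float(np.sum(diff ** 2)) / (4 * d * d)

# Identical batches incur zero penalty; shifted covariances are penalized.
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 10))
print(coral_loss(clean, clean))  # → 0.0
```

In a full training loop, a term like this would be added to the classification loss on clean and adversarial examples, encouraging the model to treat the two domains alike.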
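The reported setup (λ = 1/3, α = 0.1, Adam with learning rate 0.001, batch size 64, perturbation magnitude 0.1 in ℓ∞) can be collected into a small config, alongside a sketch of an ℓ∞-bounded perturbation step. The FGSM-style sign update and the [0, 1] pixel range are assumptions for illustration, not details stated in this row:

```python
import numpy as np

# Hyper-parameters as reported in the paper's experiment setup.
CONFIG = {
    "lambda": 1 / 3,       # hyper-parameter λ in Eq. (12)
    "alpha": 0.1,          # hyper-parameter α in Eq. (10)
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "batch_size": 64,
    "epsilon": 0.1,        # ℓ∞ perturbation magnitude
}

def linf_step(x, grad, eps=CONFIG["epsilon"]):
    """One FGSM-style step: move each input coordinate by eps in the
    direction of the gradient sign, then clip back into [0, 1]
    (pixel range assumed)."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

x = np.full((4, 4), 0.5)
grad = np.ones_like(x)
x_adv = linf_step(x, grad)
print(np.abs(x_adv - x).max())  # stays within eps in the ℓ∞ norm
```

Since np.sign(grad) is ±1 per coordinate, the resulting perturbation never exceeds eps in ℓ∞, matching the 0.1 bound used for all adversarial training methods in the paper.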