Adversarial Transformations for Semi-Supervised Learning

Authors: Teppei Suzuki, Ikuro Sato

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experiments, we show that RAT significantly improves classification performance on CIFAR-10 and SVHN compared to existing regularization methods under standard semi-supervised image classification settings.
Researcher Affiliation | Industry | Teppei Suzuki, Ikuro Sato, DENSO IT LABORATORY, INC., 2-15-1 Shibuya, Shibuya-ku, Tokyo, Japan; {tsuzuki, isato}@d-itlab.co.jp
Pseudocode | Yes | We show the pseudocode of the generation process of φT-adv in Algorithm 1. (An illustrative sketch follows this table.)
Open Source Code | No | The paper does not provide explicit statements about the availability of its source code, nor does it link to a code repository.
Open Datasets | Yes | We use the CIFAR-10 (Krizhevsky and Hinton 2009) and SVHN (Netzer et al. 2011) datasets for evaluation.
Dataset Splits | Yes | CIFAR-10 has 50,000 training data and 10,000 test data, and we split training data into a train/validation set, 45,000 data for training and 5,000 data for validation. SVHN has 73,257 data for training and 26,032 data for testing. We also split training data into 65,931 data for training and 7,326 data for validation. (A split sketch follows this table.)
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions using PyTorch for implementation but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | All hyperparameters for SSL algorithms are the same as in Oliver et al. (2018), except that we do not use L1 and L2 regularization. For all experiments, we used the same Wide ResNet architecture, depth 28 and width 2 (Zagoruyko and Komodakis 2016)... We first seek a good ϵ for each transformation with a grid search on CIFAR-10 with 4,000 labeled data... Other parameters such as ξ, λ, and parameters for optimization are the same as VAT suggested in (Oliver et al. 2018). We summarize the parameters in Table 2. We ramp up ϵ for 400,000 iterations. (Sketches of the split and the ϵ ramp-up follow this table.)
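The paper's Algorithm 1 for generating φT-adv is not reproduced in this report. As a rough illustration only, the sketch below applies a one-step, VAT-style gradient ascent on a perturbation of transformation parameters so as to maximize the divergence between predictions on the original and transformed inputs. The names `transform`, `theta0`, `eps`, and `xi` are hypothetical placeholders, and the per-batch normalization is a simplification; this is not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def adversarial_transform_params(model, x, transform, theta0, eps, xi=1e-6):
    """Illustrative VAT-style search for adversarial transformation parameters.
    `transform(x, theta)` is an assumed differentiable image transformation;
    this sketch does NOT reproduce the paper's Algorithm 1."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)            # reference prediction on clean input
    d = torch.randn_like(theta0)                  # random initial direction in parameter space
    d = xi * d / (d.norm() + 1e-12)               # small probe step, as in VAT
    d = d.detach().requires_grad_(True)
    q = F.log_softmax(model(transform(x, theta0 + d)), dim=1)
    kl = F.kl_div(q, p, reduction="batchmean")    # divergence to be maximized
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * grad / (grad.norm() + 1e-12)    # adversarial offset of magnitude eps
    return theta0 + r_adv
```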
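For the Dataset Splits row, a minimal sketch of reproducing the reported train/validation splits with torchvision is given below. The seed and the use of `random_split` are assumptions; the quoted text does not state how the split was drawn.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# CIFAR-10: 50,000 training images split into 45,000 train / 5,000 validation.
cifar_train_full = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_train, cifar_val = random_split(
    cifar_train_full, [45_000, 5_000], generator=torch.Generator().manual_seed(0)
)

# SVHN: 73,257 training images split into 65,931 train / 7,326 validation.
svhn_train_full = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
svhn_train, svhn_val = random_split(
    svhn_train_full, [65_931, 7_326], generator=torch.Generator().manual_seed(0)
)
```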
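The Experiment Setup row states that ϵ is ramped up for 400,000 iterations. A minimal sketch of such a schedule is shown below; the linear shape of the ramp is an assumption, since the quoted setup only specifies the ramp-up length.

```python
def ramp_up_eps(iteration: int, eps_max: float, ramp_iters: int = 400_000) -> float:
    """Ramp eps from 0 to eps_max over the first `ramp_iters` training iterations
    (linear shape assumed; the paper excerpt does not specify the schedule)."""
    return eps_max * min(iteration, ramp_iters) / ramp_iters

# Example: halfway through the ramp, eps is half its final value.
assert ramp_up_eps(200_000, eps_max=1.0) == 0.5
```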