Identity-Disentangled Adversarial Augmentation for Self-supervised Learning
Authors: Kaiwen Yang, Tianyi Zhou, Xinmei Tian, Dacheng Tao
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the improvements that IDAA as a data augmentation method brings to several popular methods in (1) self-supervised learning and (2) semi-supervised learning on standard benchmarks such as CIFAR (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). ... In addition, we conduct a thorough sensitivity study of IDAA by changing (1) batch sizes; (2) network architectures; (3) training epochs; (4) regularization weight β in the VAE objective; (5) dimensions of VAE's bottleneck; and (6) adversarial attack strength ϵ. |
| Researcher Affiliation | Collaboration | (1) University of Science and Technology of China, Hefei, China; (2) University of Washington, Seattle, USA; (3) University of Maryland, College Park, USA; (4) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; (5) JD Explore Academy, Beijing, China. |
| Pseudocode | No | The paper visually illustrates the architecture and pipeline in Figure 2 and describes the process in text, but it does not present a formal pseudocode or algorithm block (a hedged sketch of the described pipeline is given after this table). |
| Open Source Code | Yes | Code is available at https://github.com/kai-wen-yang/IDAA. |
| Open Datasets | Yes | In this section, we evaluate the improvements that IDAA as a data augmentation method brings to several popular methods in (1) self-supervised learning and (2) semi-supervised learning on standard benchmarks such as CIFAR (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). |
| Dataset Splits | Yes | The training/test splitting of mini-ImageNet follows (Ebrahimi et al., 2020). ... ImageNet, which contains 1.28M images in the training set and 50K images in the validation set from 1000 classes. |
| Hardware Specification | Yes | All CIFAR (ImageNet) experiments are conducted on NVIDIA V100 (A100) GPUs. |
| Software Dependencies | Yes | All code is implemented with PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | The pre-trained VAE uses a standard VAE architecture (Kingma & Welling, 2013) with 512 (3072) bottleneck dimension for CIFAR (ImageNet). Default β in Eq. (6) is set to be 0.1 and default ϵ in Eq. (9) is set to be 0.15. ... We train a ResNet-18 for 300 (100) epochs and a linear evaluation model for 1000 (200) epochs with batch size 256 (128) for CIFAR (mini-ImageNet). |
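
Since the paper provides no algorithm block, here is a minimal, hypothetical PyTorch sketch of latent-space adversarial augmentation in the spirit of IDAA: encode the image with the pre-trained VAE, take a single ϵ-bounded sign-gradient step on the latent that increases the SSL loss, and decode while adding back the reconstruction residual so the sample's identity is preserved. The names `vae.encode`, `vae.decode`, `model`, and `loss_fn` are illustrative assumptions rather than the repository's actual API, and the paper's exact update rule may differ.

```python
import torch

def idaa_augment(x, vae, model, loss_fn, eps=0.15):
    """Hypothetical one-step latent-space adversarial augmentation.

    Assumes `vae.encode` returns a latent code (e.g. the posterior mean)
    and `vae.decode` maps it back to image space; `model` is the SSL
    encoder being trained and `loss_fn` its contrastive objective.
    """
    with torch.no_grad():
        z = vae.encode(x)        # latent code of the clean image
        x_rec = vae.decode(z)    # clean VAE reconstruction

    # Perturb the latent rather than the pixels; decode and add the
    # residual (x - x_rec) so details the VAE drops are restored.
    delta = torch.zeros_like(z, requires_grad=True)
    x_adv = (x + vae.decode(z + delta) - x_rec).clamp(0, 1)

    # Ascend the SSL loss with respect to the latent perturbation.
    loss = loss_fn(model(x_adv), model(x))
    loss.backward()

    # Single sign-gradient step bounded by the attack strength eps.
    with torch.no_grad():
        step = eps * delta.grad.sign()
        x_adv = (x + vae.decode(z + step) - x_rec).clamp(0, 1)
    return x_adv.detach()
```

The augmented view `x_adv` would then replace or complement a standard augmentation inside the base SSL method, e.g. as one branch of a contrastive pair.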
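
For quick reference, the quoted experiment setup can be condensed into a single configuration. The key names below are hypothetical, not taken from the official repository; the values are the quoted CIFAR defaults, with the ImageNet/mini-ImageNet counterparts in comments.

```python
# Illustrative defaults collected from the quoted setup; key names are
# assumptions, not the official repository's configuration schema.
config = {
    "vae_bottleneck": 512,       # 3072 for ImageNet
    "beta": 0.1,                 # VAE regularization weight, Eq. (6)
    "eps": 0.15,                 # adversarial attack strength, Eq. (9)
    "backbone": "resnet18",
    "pretrain_epochs": 300,      # 100 for mini-ImageNet
    "linear_eval_epochs": 1000,  # 200 for mini-ImageNet
    "batch_size": 256,           # 128 for mini-ImageNet
}
```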