Training GANs with Stronger Augmentations via Contrastive Discriminator

Authors: Jongheon Jeong, Jinwoo Shin

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
Researcher Affiliation | Academia | 1School of Electrical Engineering, 2Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea; {jongheonj,jinwoos}@kaist.ac.kr
Pseudocode | Yes | Algorithm 1 in Appendix A describes a concrete training procedure of GANs with ContraD using the Adam optimizer (Kingma & Ba, 2014). (An illustrative sketch of such a training step is given after this table.)
Open Source Code | Yes | Code is available at https://github.com/jh-jeong/ContraD.
Open Datasets | Yes | We consider a variety of datasets including CIFAR-10/100 (Krizhevsky, 2009), CelebA-HQ-128 (Lee et al., 2020), AFHQ (Choi et al., 2020) and ImageNet (Russakovsky et al., 2015) in our experiments.
Dataset Splits | Yes | CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) consist of 60K images of size 32×32 in 10 and 100 classes, respectively, 50K for training and 10K for testing.
Hardware Specification | No | The paper does not mention any specific CPU or GPU models, or detailed specifications of the hardware used for experiments.
Software Dependencies | No | All the models are implemented in the PyTorch (Paszke et al., 2019) framework.
Experiment Setup | Yes | We provide the detailed specification on the experimental setups, e.g., architectures, training configurations and hyperparameters, in Appendix F.
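
For illustration, below is a minimal, self-contained PyTorch sketch of a single GAN training step with the Adam optimizer, i.e., the kind of procedure that Algorithm 1 (Appendix A) specifies concretely. It is not the authors' ContraD algorithm: the toy networks (ToyGenerator, ToyDiscriminator), the hinge losses, and all hyperparameters below are placeholder assumptions, and the contrastive training of the discriminator described in the paper is omitted.

```python
# Minimal sketch of one GAN training step with Adam (hinge loss).
# NOT the ContraD algorithm from the paper; networks, losses, and
# hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    def __init__(self, z_dim=128, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


class ToyDiscriminator(nn.Module):
    def __init__(self, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(G, D, opt_g, opt_d, real, z_dim=128):
    """One discriminator update followed by one generator update."""
    batch = real.size(0)

    # Discriminator step: hinge loss on real vs. generated samples.
    z = torch.randn(batch, z_dim, device=real.device)
    fake = G(z).detach()
    d_loss = torch.relu(1.0 - D(real)).mean() + torch.relu(1.0 + D(fake)).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: maximize the discriminator score on generated samples.
    z = torch.randn(batch, z_dim, device=real.device)
    g_loss = -D(G(z)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    G, D = ToyGenerator(), ToyDiscriminator()
    # Adam settings here are common GAN defaults, not the paper's exact values.
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.0, 0.999))
    real = torch.rand(8, 3 * 32 * 32) * 2 - 1  # stand-in for a real data batch
    print(train_step(G, D, opt_g, opt_d, real))
```

The hinge losses and zero first-moment Adam betas are common choices in modern GAN implementations and are used here only to make the sketch concrete; the paper's actual architectures, objectives, and hyperparameters are given in its Appendix F.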