Conditional GANs with Auxiliary Discriminative Classifier

Authors: Liang Hou, Qi Cao, Huawei Shen, Siyuan Pan, Xiaoshuang Li, Xueqi Cheng

ICML 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experimental results on synthetic and real-world datasets demonstrate the superiority of ADC-GAN in conditional generative modeling compared to state-of-the-art classifier-based and projection-based conditional GANs. We first conduct experiments on a one-dimensional synthetic mixture of Gaussians, following the practices of (Gong et al., 2019), to qualitatively show the fidelity of the distribution-learning capability of ADC-GAN. In this section, we conduct experiments on three common real-world datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015) based on the BigGAN-PyTorch repository with our extensions.
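The quoted synthetic experiment uses a one-dimensional mixture of Gaussians as labeled data. As a minimal sketch of that kind of setup (the component means, standard deviation, and function name below are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def sample_mog_1d(n_per_class, means, std=0.1, seed=0):
    """Sample a labeled 1-D mixture of Gaussians.

    Each class label corresponds to one Gaussian component, mirroring
    the kind of synthetic dataset used for qualitative tests of
    conditional distribution learning. All constants here are
    illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for label, mu in enumerate(means):
        xs.append(rng.normal(mu, std, size=n_per_class))   # samples of class `label`
        ys.append(np.full(n_per_class, label))             # matching labels
    return np.concatenate(xs), np.concatenate(ys)

# 3 classes, 1000 samples each, centered at -2, 0, and 2
x, y = sample_mog_1d(1000, means=[-2.0, 0.0, 2.0])
```

A conditional GAN trained on such data can then be inspected visually: the generated per-class histograms should match the corresponding Gaussian components.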
Researcher Affiliation Academia (1) Data Intelligence System Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Shanghai Jiao Tong University, Shanghai, China; (4) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.
Pseudocode No The paper provides mathematical formulations and proofs but does not include any pseudocode or algorithm blocks.
Open Source Code Yes In this section, we conduct experiments on three common real-world datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015) based on the BigGAN-PyTorch repository with our extensions. (Footnote 4: https://github.com/houliangict/adcgan)
Open Datasets Yes We conduct experiments on three common real-world datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015)... We first conduct experiments on ImageNet (128×128) following the experimental settings of ReACGAN (Kang et al., 2021).
Dataset Splits No The paper mentions 'validation data' in the context of classification experiments on learned representations, but it does not specify concrete training/validation/test dataset splits (e.g., percentages or sample counts) for reproduction of the main GAN experiments.
Hardware Specification No The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models.
Software Dependencies No The paper mentions using the 'BigGAN-PyTorch repository' and the 'PyTorch-StudioGAN repository' and that the 'optimizer is Adam', but it does not specify exact version numbers for these software components or libraries.
Experiment Setup Yes The optimizer is Adam with a learning rate of 2×10⁻⁴ on CIFAR-10/100, and 1×10⁻⁴ for the generator and 4×10⁻⁴ for the discriminator on Tiny-ImageNet. We train all methods for 1000 and 500 epochs with batch sizes of 50 and 100 on CIFAR-10/100 and Tiny-ImageNet, respectively. The discriminator/classifier are updated 4 and 2 times per generator update step on CIFAR-10/100 and Tiny-ImageNet, respectively. ... The coefficient hyperparameters of AC-GAN and AM-GAN (Zhou et al., 2018) (cf. Appendix C for analysis) are set as λ = 0.2 as it performs the best. As for TAC-GAN and ADC-GAN, the coefficient hyperparameters are set as λ = 1.0 on CIFAR-10/100 and λ = 0.5 on Tiny-ImageNet.
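The hyperparameters quoted above can be collected into a plain per-dataset configuration, which is useful when checking a reproduction against the paper. This is a hedged sketch: the dict layout and key names below are our own, not taken from the authors' repository; only the numeric values come from the quoted text.

```python
# Per-dataset training hyperparameters as quoted from the paper.
# Key names are illustrative; values follow the quoted setup.
TRAIN_CONFIGS = {
    "cifar": {                    # CIFAR-10 / CIFAR-100
        "lr_g": 2e-4,             # Adam learning rate (generator)
        "lr_d": 2e-4,             # Adam learning rate (discriminator)
        "epochs": 1000,
        "batch_size": 50,
        "d_steps_per_g_step": 4,  # discriminator/classifier updates per G step
        "lambda_ac_am": 0.2,      # AC-GAN / AM-GAN coefficient
        "lambda_tac_adc": 1.0,    # TAC-GAN / ADC-GAN coefficient
    },
    "tiny_imagenet": {
        "lr_g": 1e-4,
        "lr_d": 4e-4,
        "epochs": 500,
        "batch_size": 100,
        "d_steps_per_g_step": 2,
        "lambda_ac_am": 0.2,
        "lambda_tac_adc": 0.5,
    },
}
```

Keeping the two regimes side by side makes the asymmetry explicit: Tiny-ImageNet uses a lower generator learning rate, a higher discriminator learning rate, fewer discriminator steps per generator step, and a smaller ADC-GAN coefficient than CIFAR-10/100.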