Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training

Authors: Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our model, we conduct image generation experiments on CIFAR10 [30], Tiny-ImageNet [32], CUB200 [33], and ImageNet [31] datasets. Through extensive experiments, we demonstrate that ReACGAN beats both classifier-based and projection-based GANs, improving over the state of the art by 2.5%, 15.8%, 5.1%, and 14.5% in terms of Fréchet Inception Distance (FID) [34] on the four datasets, respectively.
Researcher Affiliation | Academia | Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park; Pohang University of Science and Technology (POSTECH), South Korea. {mgkang, wh.shim, mscho, jaesik.park}@postech.ac.kr
Pseudocode | Yes | We attach the algorithm table in Appendix A.
Open Source Code | Yes | Model weights and a software package that provides implementations of representative cGANs and all experiments in our paper are available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
Open Datasets | Yes | To verify the effectiveness of ReACGAN, we conduct conditional image generation experiments using five datasets: CIFAR10 [30], Tiny-ImageNet [32], CUB200 [33], ImageNet [31], and AFHQ [41], and four evaluation metrics: Inception Score (IS) [42], Fréchet Inception Distance (FID) [34], and F0.125 (Precision) and F8 (Recall) [43]. The details on the training datasets are in Appendix C.1.
Dataset Splits | Yes | We use the validation split as the default reference set, but we use the test split of CIFAR10 and the training split of CUB200 and AFHQ because those datasets lack a validation split.
Hardware Specification | No | The paper states, 'We include the total amount of compute and the type of resources in Appendix H.' However, Appendix H is not provided in the supplied text, so specific hardware details are not available.
Software Dependencies | No | The paper mentions the 'PyTorch-StudioGAN library' and 'StyleGAN2' but does not specify version numbers for PyTorch, CUDA, or other key software components required for replication.
Experiment Setup | Yes | Before conducting the main experiments, we perform a hyperparameter search with candidates of a temperature τ ∈ {0.125, 0.25, 0.5, 0.75, 1.0} and a positive margin mp ∈ {0.5, 0.75, 0.9, 0.95, 0.98, 1.0}. We set the negative margin mn = 1 − mp to reduce search time. Through extensive experiments with 3 runs per setting, we select τ = {0.5, 0.75, 0.25, 0.5, 0.25} and mp = {0.98, 1.0, 0.95, 0.98, 0.90} for the CIFAR10, Tiny-ImageNet, CUB200, ImageNet 256 B.S., and ImageNet 2048 B.S. experiments, respectively.
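The search described above can be sketched as a simple grid enumeration. This is an illustrative reconstruction, not code from the paper's repository: the dictionary keys and variable names are assumptions, and only the candidate values, the mn = 1 − mp tying rule, and the reported per-dataset selections come from the text.

```python
from itertools import product

# Candidate values reported in the paper's hyperparameter search.
taus = [0.125, 0.25, 0.5, 0.75, 1.0]
pos_margins = [0.5, 0.75, 0.9, 0.95, 0.98, 1.0]

# The negative margin is tied to the positive one (mn = 1 - mp)
# to reduce search time, so the grid is only 5 x 6 = 30 settings.
grid = [
    {"tau": tau, "m_p": m_p, "m_n": round(1 - m_p, 3)}
    for tau, m_p in product(taus, pos_margins)
]
print(len(grid))  # 30 candidate settings, each run 3 times

# Final (tau, m_p) selections per dataset, as reported:
selected = {
    "CIFAR10": (0.5, 0.98),
    "Tiny-ImageNet": (0.75, 1.0),
    "CUB200": (0.25, 0.95),
    "ImageNet (256 B.S.)": (0.5, 0.98),
    "ImageNet (2048 B.S.)": (0.25, 0.90),
}
```

Tying mn to mp cuts the grid from 180 joint settings to 30, which is what makes 3 runs per setting tractable.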