A Unified View of cGANs with and without Classifiers

Authors: Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin

NeurIPS 2021

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We conduct our experiments on CIFAR-10 [20] and Tiny ImageNet [22] for analysis, and ImageNet [6] for large-scale empirical study. ... In our experiment, we use two common metrics, Fréchet Inception Distance [FID; 14] and Inception Score [IS; 44], to evaluate our generation quality and diversity.
Researcher Affiliation | Collaboration | Si-An Chen, National Taiwan University, d09922007@csie.ntu.edu.tw; Chun-Liang Li, Google Cloud AI, chunliang@google.com; Hsuan-Tien Lin, National Taiwan University, htlin@ntu.edu.tw
Pseudocode | No | The overall training procedure of ECGAN is presented in Appendix E. This appendix describes the procedure in text, not as a formal pseudocode block.
Open Source Code | Yes | The code is available at https://github.com/sian-chen/PyTorch-ECGAN.
Open Datasets | Yes | We conduct our experiments on CIFAR-10 [20] and Tiny ImageNet [22] for analysis, and ImageNet [6] for large-scale empirical study. All datasets are publicly available for research use.
Dataset Splits | No | The paper provides training and test set sizes in Table 2 but does not explicitly detail a validation set split or its size.
Hardware Specification | Yes | The experiments take 1-2 days on single-GPU (Nvidia Tesla V100) machines for CIFAR-10 and Tiny ImageNet, and 6 days on 8-GPU machines for ImageNet.
Software Dependencies | No | We use StudioGAN [16] to conduct our experiments. StudioGAN is a PyTorch-based project distributed under the MIT license... The code is available at https://github.com/sian-chen/PyTorch-ECGAN. No specific version numbers for PyTorch or other dependencies are mentioned.
Experiment Setup | Yes | We fix the learning rate for generators and discriminators to 0.0001 and 0.0004, respectively, and tune λ_clf in {1, 0.1, 0.05, 0.01}. We follow the setting λ_c = 1 in [16] when using the 2C loss, and set α = 1 when applying the unconditional GAN loss.
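The evaluation metrics quoted above (FID and IS) are standard; at its core, FID is the Fréchet distance between two Gaussians fitted to Inception-v3 features of real and generated images. A minimal NumPy/SciPy sketch of that distance, for reference only (the function name and interface are illustrative, not taken from the paper's code):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this formula to the mean and covariance of Inception-v3
    features computed over real vs. generated images; the feature
    extraction step is omitted here.
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical error can
    # introduce tiny imaginary components, which we discard.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

Identical feature statistics yield a distance of zero, which is the sanity check typically run before comparing models.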
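The experiment-setup row lists concrete hyperparameters; gathered into one sketch for readability (the key names are hypothetical, not the authors' actual configuration schema):

```python
# Hypothetical config sketch for the ECGAN setup described in the row above.
# Key names are illustrative; only the numeric values come from the paper.
config = {
    "lr_generator": 1e-4,                      # fixed generator learning rate
    "lr_discriminator": 4e-4,                  # fixed discriminator learning rate
    "lambda_clf_grid": [1, 0.1, 0.05, 0.01],   # classifier-loss weight, tuned
    "lambda_c": 1.0,                           # 2C-loss weight, following [16]
    "alpha": 1.0,                              # unconditional GAN loss weight
}
```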