MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators

Authors: Jinyoung Choi, Bohyung Han

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We demonstrate the effectiveness of our algorithm using multiple evaluation metrics in the standard datasets for diverse tasks. We evaluate the performance of MCL-GAN on unconditional and conditional image generation." |
| Researcher Affiliation | Academia | Jinyoung Choi (1,3), Bohyung Han (1,2,3); 1 ECE, 2 IPAI, 3 ASRI, Seoul National University, Korea; {jin0.choi,bhhan}@snu.ac.kr |
| Pseudocode | No | The paper does not contain a pseudocode or algorithm block. |
| Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix C and D, and the supplementary material." |
| Open Datasets | Yes | "We run the unconditional GAN experiment on four distinct datasets including MNIST [39], Fashion-MNIST [40], CIFAR-10 [41] and CelebA [42]." |
| Dataset Splits | Yes | "For the StyleGAN2 experiments on CelebA, we use the first and last 30K images from the align&cropped version of the train and validation splits following [30]." Also: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 2 and Appendix C." |
| Hardware Specification | Yes | "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix A.3." |
| Software Dependencies | No | The paper mentions applying the method to different GAN architectures (DCGAN, StyleGAN2) but does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | "5.2.1 Experiment setup and evaluation protocol. Appendix C describes more details of our setting." Also: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 2 and Appendix C." |