Masked Generative Adversarial Networks are Data-Efficient Generation Learners
Authors: Jiaxing Huang, Kaiwen Cui, Dayan Guan, Aoran Xiao, Fangneng Zhan, Shijian Lu, Shengcai Liao, Eric Xing
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that Masked GAN achieves superior performance consistently across different network architectures (e.g., CNNs including BigGAN and StyleGAN-v2, and Transformers including TransGAN and GANformer) and datasets (e.g., CIFAR-10, CIFAR-100, ImageNet, 100-shot, AFHQ, FFHQ and Cityscapes). An illustrative masking sketch follows the table. |
| Researcher Affiliation | Academia | Jiaxing Huang¹, Kaiwen Cui¹, Dayan Guan¹, Aoran Xiao¹, Fangneng Zhan², Shijian Lu¹, Shengcai Liao³, Eric Xing⁴,⁵. ¹ School of Computer Science and Engineering, Nanyang Technological University, Singapore; ² Max Planck Institute for Informatics, Germany; ³ Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE; ⁴ School of Computer Science, Carnegie Mellon University, USA; ⁵ Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | In the 'Questions for Paper Analysis' section, the question 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is answered '[N/A]', indicating that no code is openly provided. |
| Open Datasets | Yes | Section 4.1 presents experiments with BigGAN over the datasets CIFAR-10 [34], CIFAR-100 [34] and ImageNet [15]. Section 4.2 reports experimental results with StyleGAN-v2 over the datasets 100-shot [58], AFHQ [11] and FFHQ [32]. Section 4.3 presents experiments with two transformer-based GANs (TransGAN and GANformer) over CIFAR-10, CIFAR-100 and Cityscapes [12]. |
| Dataset Splits | Yes | We calculate FID (↓) scores with 10K generated samples and the validation set, as in [58]. All models are trained with 100%, 20% or 10% training data (i.e., 50K, 10K or 5K images), and evaluated over the validation set (10K images). Illustrative subset and FID sketches follow the table. |
| Hardware Specification | No | This research was also carried out on the High Performance Computing resources at Inception Institute of Artificial Intelligence, Abu Dhabi. This statement mentions a computing resource but does not provide specific hardware details such as GPU/CPU models, memory, or processor types. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | No | The implementation and dataset details are provided in the appendix, indicating that specific experimental setup details such as hyperparameters are not present in the main body of the paper. |
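
The paper's core idea, masking inputs during GAN training, can be illustrated with a minimal sketch. The paper's exact masking strategies are not reproduced here, so plain random patch masking stands in; the function name `random_patch_mask`, the patch size, and the masking ratio are all illustrative assumptions, not the authors' method.

```python
import torch

def random_patch_mask(images: torch.Tensor, patch: int = 8, ratio: float = 0.3) -> torch.Tensor:
    """Zero out a random subset of non-overlapping patches.

    Illustrative stand-in only: the paper's own masking strategies are
    not specified in this report, so plain random patch masking is used.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    # One keep/drop decision per patch, shared across all channels.
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask

# Hypothetical use inside a standard GAN discriminator step:
# d_real = discriminator(random_patch_mask(real_images))
# d_fake = discriminator(random_patch_mask(generator(z).detach()))
```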
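The stated splits (100%, 20% or 10% of the 50K CIFAR-10 training images) can be reproduced with a simple subset; the sampling scheme is not specified in the report, so a seeded random subset is assumed here for illustration.

```python
import torch
from torchvision import datasets, transforms

def cifar10_fraction(root: str, fraction: float, seed: int = 0):
    """Return a fixed random fraction of the CIFAR-10 training set.

    Assumption: the paper gives the subset sizes (50K/10K/5K) but not
    how images are sampled; a seeded random permutation is used here.
    """
    full = datasets.CIFAR10(root, train=True, download=True,
                            transform=transforms.ToTensor())
    n = int(len(full) * fraction)                   # e.g., 50_000 * 0.10 -> 5_000
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(full), generator=g)[:n]
    return torch.utils.data.Subset(full, idx.tolist())

train_10pct = cifar10_fraction("./data", fraction=0.10)
```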
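The evaluation protocol (FID over 10K generated samples against the validation set) can likewise be sketched. The paper does not name its FID tooling, so `torchmetrics` is assumed here; the latent dimension `z_dim`, the batch size, and the `generator(z)` call signature are also illustrative assumptions.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

@torch.no_grad()
def evaluate_fid(generator, val_loader, z_dim=128,
                 n_samples=10_000, batch=100, device="cuda"):
    # Accumulate Inception statistics for real (validation) and
    # generated images, then compute FID (lower is better).
    fid = FrechetInceptionDistance(feature=2048, normalize=True).to(device)
    generator.eval()
    for real, _ in val_loader:               # 10K validation images
        fid.update(real.to(device), real=True)
    for _ in range(n_samples // batch):      # 10K generated samples
        z = torch.randn(batch, z_dim, device=device)
        fake = generator(z).clamp(0, 1)      # normalize=True expects [0, 1] floats
        fid.update(fake, real=False)
    return fid.compute().item()
```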