UniGAN: Reducing Mode Collapse in GANs using a Uniform Generator

Authors: Ziqi Pan, Li Niu, Liqing Zhang

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'Experimental results verify the effectiveness of our UniGAN in learning a uniform generator and improving uniform diversity.' |
| Researcher Affiliation | Academia | 'Ziqi Pan, Li Niu, Liqing Zhang. MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China' |
| Pseudocode | No | The paper describes its methodology in text and equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | 'We include code in the supplementary material, including the implementation of our NF-based generator, the LT technique, the generator uniformity regularization, and the udiv metric.' (See the illustrative sketch below the table.) |
| Open Datasets | Yes | 'We also provide results on simple datasets including MNIST [58], Fashion MNIST [59] and their colored version [22], and CIFAR10 [60]. We also provide results on natural image datasets including CelebA [61], FFHQ [62], AFHQ [63] and LSUN [64].' |
| Dataset Splits | No | The paper states that training details, which would include data splits, are provided in the supplementary material, not in the main text: 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See supplementary.' |
| Hardware Specification | No | The paper states that details about the total amount of compute and type of resources used are provided in the supplementary material, not in the main text: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See supplementary.' |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., libraries, frameworks, or operating systems) used in the experiments within the main text. |
| Experiment Setup | No | The paper indicates that 'all the training details (e.g., data splits, hyperparameters, how they were chosen)' are specified in the supplementary material, not in the main text. |
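
The released code is reported to contain a "generator uniformity regularization," which the paper ties to learning a uniform generator. As a rough illustration only, here is a minimal sketch of one way a Jacobian-based uniformity penalty can be written in PyTorch; it assumes a penalty on the spread of the generator Jacobian's singular values at a sampled latent point, which is a hypothetical stand-in and may differ from the paper's actual formulation.

```python
import torch
from torch.autograd.functional import jacobian

# Hypothetical sketch, NOT the UniGAN release: penalize the spread of
# the generator Jacobian's singular values so that G stretches every
# latent direction by roughly the same factor.
def uniformity_penalty(generator, z):
    """Variance of the log singular values of dG/dz at latent point z."""
    # Jacobian of the flattened output w.r.t. z: shape (out_dim, latent_dim).
    J = jacobian(lambda v: generator(v).flatten(), z)
    sv = torch.linalg.svdvals(J)
    return torch.log(sv + 1e-8).var()

# Toy usage with a small MLP generator on a 16-d latent space.
G = torch.nn.Sequential(
    torch.nn.Linear(16, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 32),
)
z = torch.randn(16)
print(float(uniformity_penalty(G, z)))
```

To backpropagate through such a penalty during training, one would pass `create_graph=True` to `jacobian`; for image-scale generators the full Jacobian is impractical and a stochastic estimate would be needed.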