NICE: NoIse-modulated Consistency rEgularization for Data-Efficient GANs

Authors: Yao Ni, Piotr Koniusz

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive experiments confirm the effectiveness of NICE in penalizing gradients and reducing the generalization gap. Despite the simple design, NICE significantly improves the training stability and generalization of GANs, outperforming alternative approaches in preventing discriminator overfitting. NICE achieves superior results on challenging limited-data benchmarks, including CIFAR-10/100, ImageNet, FFHQ, and low-shot image generation tasks. [...] We conduct experiments on CIFAR-10/100 [28] using BigGAN [7] and OmniGAN [71], as well as on ImageNet [10] using BigGAN for conditional image generation. (A gradient-norm monitoring sketch follows the table.)
Researcher Affiliation | Academia | Yao Ni, Piotr Koniusz; The Australian National University; Data61, CSIRO; firstname.lastname@anu.edu.au
Pseudocode | No | The paper includes theoretical derivations, figures, and descriptions of methods but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The corresponding author. Code: https://github.com/MaxwellYaoNi/NICE
Open Datasets | Yes | We conduct experiments on CIFAR-10/100 [28] using BigGAN [7] and OmniGAN [71], as well as on ImageNet [10] using BigGAN for conditional image generation. We also evaluate our method on low-shot datasets [68], which include 100-shot Obama/Panda/Grumpy Cat and AnimalFace Dog/Cat [51], and FFHQ [24] using StyleGAN2 [25].
Dataset Splits | Yes | CIFAR-10 has 50K/10K training/testing images at 32×32 resolution from 10 categories, whereas CIFAR-100 has 100 classes. [...] Tables 1 and 2 demonstrate that NICE consistently outperforms baselines such as BigGAN, LeCam+DA, OmniGAN and OmniGAN+ADA on CIFAR-10 and CIFAR-100, firmly establishing its superiority. [...] given different percentages of training data. (A generic loading sketch for these splits follows the table.)
Hardware Specification | Yes | Experiments were performed on NVIDIA A100 GPUs.
Software Dependencies | No | The paper mentions using specific GAN architectures like BigGAN, OmniGAN, and StyleGAN2, but it does not specify the version numbers for the underlying software libraries or programming languages (e.g., Python, PyTorch/TensorFlow versions).
Experiment Setup | Yes | We follow [68] and train OmniGAN and BigGAN for 1K epochs on the full data and 5K epochs in the 10%/20% data settings. We equip the discriminator with adaptive noise modulation after the convolution weights c ∈ {C1, C2, CS} at all blocks l ∈ {1, 2, 3, 4}. We set β = 0.001, η = 0.5, γ = 10. (A hypothetical noise-modulation sketch follows the table.)
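
The "Research Type" row quotes the paper's claim that NICE penalizes gradients and reduces the generalization gap. NICE achieves this implicitly through noise modulation rather than an explicit penalty term; the snippet below is only a generic PyTorch utility (our assumption, not code from the paper) for monitoring the discriminator's input-gradient norm, one way to check such an effect empirically. The name `input_gradient_norm` and its arguments are hypothetical.

```python
import torch

def input_gradient_norm(discriminator, images):
    """Mean L2 norm of dD(x)/dx over a batch: a rough proxy for how
    strongly the discriminator's gradients are being suppressed."""
    images = images.clone().requires_grad_(True)
    scores = discriminator(images)                      # D(x), any real-valued output
    grads, = torch.autograd.grad(scores.sum(), images)  # dD(x)/dx, per sample
    return grads.flatten(1).norm(dim=1).mean()
```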
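
For the "Dataset Splits" row: CIFAR-10's standard 50K/10K train/test split, and a fixed subset for the 10%/20% low-data settings, can be reproduced with stock torchvision. This is a generic loading sketch under that assumption, not the authors' pipeline; the seed and subset construction are illustrative.

```python
import torch
import torchvision

# Standard CIFAR-10 split: 50,000 training and 10,000 test images (32x32).
train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True)
test_set = torchvision.datasets.CIFAR10("./data", train=False, download=True)

# 10% data setting: keep a fixed random subset of the training images.
g = torch.Generator().manual_seed(0)
idx = torch.randperm(len(train_set), generator=g)[: len(train_set) // 10]
train_10pct = torch.utils.data.Subset(train_set, idx.tolist())

print(len(train_set), len(test_set), len(train_10pct))  # 50000 10000 5000
```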
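
For the "Experiment Setup" row: the paper attaches adaptive noise modulation after the convolution weights in every discriminator block, but publishes no pseudocode (see the "Pseudocode" row). The class below is therefore a minimal, hypothetical PyTorch sketch of weight-level noise injection with a constant scale β = 0.001; NICE's actual modulation is adaptive, and the roles of η and γ are defined in the paper, not here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyConv2d(nn.Module):
    """Hypothetical noise-modulated convolution: during training the
    weights are perturbed by zero-mean Gaussian noise scaled by beta.
    A constant beta stands in for NICE's adaptive modulation."""

    def __init__(self, in_ch, out_ch, kernel_size, beta=1e-3, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.beta = beta

    def forward(self, x):
        w = self.conv.weight
        if self.training:
            w = w + self.beta * torch.randn_like(w)  # noise applied to the weights
        return F.conv2d(x, w, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding)
```

In a BigGAN/OmniGAN-style discriminator, such a module would stand in for the two main convolutions (C1, C2) and the shortcut convolution (CS) in each of the four blocks quoted above.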