Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective

Authors: Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments endorse the effectiveness of our proposed framework, across various GAN architectures (SNGAN, BigGAN, and StyleGAN-V2) and diverse datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet, ImageNet, and multiple few-shot generation datasets).
Researcher Affiliation | Collaboration | Tianlong Chen¹, Yu Cheng², Zhe Gan², Jingjing Liu³, Zhangyang Wang¹ (¹University of Texas at Austin, ²Microsoft Corporation, ³Tsinghua University)
Pseudocode | Yes | Algorithm 1: Data-Efficient Iterative Magnitude Pruning Procedures ... Algorithm 2: Training (Sparse) GAN with Data- and Feature-level Augmentations (a minimal pruning sketch is given after this table)
Open Source Code | Yes | Codes are available at: https://github.com/VITA-Group/Ultra-Data-Efficient-GAN-Training.
Open Datasets | Yes | In this section, we conduct comprehensive experiments on Tiny-ImageNet [88], ImageNet [89], CIFAR-10 [90], and CIFAR-100 based on the unconditional SNGAN [23] and StyleGAN-V2 [6], as well as the class-conditional BigGAN [2]. ... We compare these transfer learning approaches with our data-efficient training scheme. ... Our comparison experiments are conducted using StyleGAN-V2 on the Animal Face [96] dataset (160 cats and 389 dogs), and the 100-shot Obama, Grumpy Cat, and Panda datasets provided by [1].
Dataset Splits | Yes | FID and IS are measured using 10K samples; the official validation set is utilized as the reference distribution. ... IS and FID are measured using 10K samples; the validation set is utilized as the reference. (A sketch of this measurement protocol follows the table.)
Hardware Specification | Yes | All GANs are trained on 8 NVIDIA V100 32GB GPUs.
Software Dependencies | No | The paper mentions using the "StudioGAN codebase", "DiffAug [1]", "ADA [15]", and a "PyTorch implementation", but does not specify version numbers for any of these software components.
Experiment Setup | Yes | BigGAN takes learning rates of {4, 2, 2} × 10^-4 for G and {1, 5, 2} × 10^-4 for D, batch sizes of {256, 256, 64}, 1 × 10^5 training iterations, and {1, 2, 5} D steps per G step on the {Tiny-ImageNet, ImageNet, CIFAR} datasets. ... SNGAN uses learning rates of 2 × 10^-4 for G and D, a batch size of 64, 5 × 10^4 training iterations, and five D steps per G step on CIFAR. ... AdvAug with PGD-1 and step size 0.01/0.001 is applied... ...applying AdvAug to the last layer of D and the first layer of G, with PGD-1 and step size 0.01, seems to be a sweet-spot configuration for data-efficient GAN training, which is hence adopted as our default setting. (A configuration sketch follows the table.)
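
The sketches referenced in the table follow. First, a minimal sketch of the iterative magnitude pruning (lottery ticket) loop that Algorithm 1 names, written against PyTorch's torch.nn.utils.prune module. The function find_gan_ticket, the train_gan subroutine, the per-round prune ratio, and the choice to prune only the generator are placeholder assumptions for illustration, not the paper's exact Algorithm 1.

import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def find_gan_ticket(G, train_gan, rounds=5, ratio=0.2):
    """Hypothetical IMP loop: train, prune a fraction of the remaining
    generator weights by magnitude, then rewind survivors to their
    initial values (lottery-ticket rewinding)."""
    init_state = copy.deepcopy(G.state_dict())        # initialization to rewind to
    targets = [(m, "weight") for m in G.modules()
               if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear))]
    for _ in range(rounds):
        train_gan(G)                                   # placeholder: full GAN training on the small dataset
        prune.global_unstructured(targets,
                                  pruning_method=prune.L1Unstructured,
                                  amount=ratio)        # prune `ratio` of the still-unpruned weights
        with torch.no_grad():                          # rewind surviving weights to initialization
            for name, module in G.named_modules():
                if hasattr(module, "weight_orig"):
                    module.weight_orig.copy_(init_state[f"{name}.weight"])
    return G                                           # sparse "winning ticket" generator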
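
Next, one plausible way to reproduce the stated metric protocol (IS and FID over 10K generated samples, with the validation set as the reference distribution) is via the torch-fidelity package. This is an assumption, since the paper does not name its metric implementation, and both directory paths below are placeholders.

import torch_fidelity

# 10,000 generated samples vs. the official validation images as reference.
metrics = torch_fidelity.calculate_metrics(
    input1="samples_10k/",     # placeholder: directory of generated images
    input2="cifar10_val/",     # placeholder: directory of reference/validation images
    isc=True,                  # Inception Score
    fid=True,                  # Frechet Inception Distance
    cuda=True,
)
print(metrics)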
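
Finally, a rough sketch of the quoted SNGAN/CIFAR schedule and a one-step sign-gradient (PGD-1) feature perturbation of the kind the AdvAug setting describes. The names sngan_cifar, pgd1_perturb, and loss_fn are hypothetical, and the exact hook points (the last layer of D and the first layer of G) are not shown.

import torch

# Quoted SNGAN/CIFAR schedule collected in one place for reference.
sngan_cifar = dict(lr_g=2e-4, lr_d=2e-4, batch_size=64,
                   iterations=50_000, d_steps_per_g=5)

def pgd1_perturb(feature, loss_fn, step_size=0.01):
    """One-step (PGD-1) adversarial perturbation of an intermediate feature map.
    loss_fn is a placeholder that must return a scalar loss; step size 0.001
    is the other value reported in the paper."""
    feature = feature.detach().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(feature), feature)
    return (feature + step_size * grad.sign()).detach()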