Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis

Authors: Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We propose a light-weight GAN structure that gains superior quality on 1024×1024 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has a consistent performance, even with less than 100 training samples.
Researcher Affiliation | Collaboration | 1 Playform Artrendex Inc., USA; 2 Department of Computer Science, Rutgers University
Pseudocode | No | The paper illustrates model structures with figures (Fig. 3 and Fig. 4) but does not contain pseudocode or algorithm blocks.
Open Source Code | Yes | The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch
Open Datasets | Yes | The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch. On 256×256 resolution, we test on Animal-Face Dog and Cat (Si & Zhu, 2011), 100-Shot-Obama, Panda, and Grumpy-cat (Zhao et al., 2020). On 1024×1024 resolution, we test on Flickr-Face-HQ (FFHQ) (Karras et al., 2019), Oxford-flowers (Nilsback & Zisserman, 2006), art paintings from WikiArt (wikiart.org), photographs on natural landscape from Unsplash (unsplash.com), Pokemon (pokemon.com), anime face, skull, and shell.
Dataset Splits | No | The paper mentions a training/testing ratio of 9:1 for latent space back-tracking, but does not explicitly state specific train/validation/test splits with percentages or counts for the main GAN training (see the back-tracking sketch after the table).
Hardware Specification | Yes | single RTX-2080 GPU, single RTX 2080-Ti GPU, Nvidia's RTX 2080-Ti GPU, RTX TITAN GPU.
Software Dependencies | No | The paper states 'implemented using PyTorch (Paszke et al., 2017)' but does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We train the models 5 times with random seeds; We employ the hinge version of the adversarial loss; batch-size of 8, batch-size of 16, batch-size of 32 (see the hinge-loss sketch after the table).
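
The Dataset Splits row refers to the 9:1 training/testing ratio the paper uses for latent space back-tracking: the GAN is trained on 90% of the images, and latent vectors are then optimized so the frozen generator reconstructs the held-out 10%. The snippet below is a minimal sketch of that idea, not the authors' procedure: the image folder path, the latent size of 256, the tiny stand-in generator, the 200 optimization steps, and the plain MSE objective (in place of the perceptual/LPIPS distance the paper reports) are all assumptions for illustration.

import random
from pathlib import Path

import torch
import torch.nn as nn
import torch.nn.functional as F

# 9:1 split of an image file list; the held-out 10% is used only for back-tracking.
files = sorted(Path("data/flowers").glob("*.jpg"))  # hypothetical image folder
random.seed(0)
random.shuffle(files)
cut = int(0.9 * len(files))
train_files, test_files = files[:cut], files[cut:]

# Latent back-tracking against a held-out image. `netG` should be the trained,
# frozen generator from the FastGAN repo; a tiny stand-in is used here so the
# snippet runs end to end.
nz = 256
netG = nn.Sequential(nn.Linear(nz, 3 * 32 * 32), nn.Tanh(), nn.Unflatten(1, (3, 32, 32)))
netG.requires_grad_(False)

target = torch.rand(1, 3, 32, 32) * 2 - 1  # stand-in for a loaded held-out image

z = torch.randn(1, nz, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    # Plain MSE stands in for the perceptual (LPIPS) distance used for evaluation.
    loss = F.mse_loss(netG(z), target)
    loss.backward()
    opt.step()
print("reconstruction loss:", loss.item())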
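
The Experiment Setup row quotes the use of the hinge version of the adversarial loss. For reference, here is a minimal PyTorch sketch of the standard hinge GAN objectives; the function names and the random logits in the usage example are illustrative, and FastGAN-specific terms such as the decoder reconstruction loss are omitted.

import torch
import torch.nn.functional as F

def d_hinge_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator hinge loss: push real logits above +1 and fake logits below -1."""
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Generator hinge loss: raise the discriminator's logits on generated samples."""
    return -d_fake.mean()

# Usage example with random logits standing in for discriminator outputs on a batch of 8.
d_real = torch.randn(8)
d_fake = torch.randn(8)
print(d_hinge_loss(d_real, d_fake).item(), g_hinge_loss(d_fake).item())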