On Leveraging Pretrained GANs for Generation with Limited Data
Authors: Miaoyun Zhao, Yulai Cong, Lawrence Carin
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data. [...] Extensive experiments are conducted to verify the effectiveness of the proposed techniques. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Duke University, Durham NC, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at github.com/MiaoyunZhao/GANTransferLimitedData. |
| Open Datasets | Yes | Taking natural image generation as an illustrative example, we demonstrate the effectiveness of the proposed techniques by transferring the source GP-GAN model pretrained on the large-scale ImageNet (containing 1.2 million images from 1,000 classes) to facilitate generation in perceptually-distinct target domains with (i) four smaller datasets, i.e., CelebA (Liu et al., 2015) (202,599), Flowers (Nilsback & Zisserman, 2008) (8,189), Cars (Krause et al., 2013) (8,144), and Cathedral (Zhou et al., 2014) (7,350); (ii) their modified variants containing only 1,000 images; and (iii) two extremely limited datasets consisting of 25 images (following (Noguchi & Harada, 2019)). |
| Dataset Splits | No | The paper mentions using the 'whole CelebA data for training' in Section 3.1.1 and different dataset sizes (e.g., 1,000 or 25 images) as the target data for experiments, but it does not specify explicit train/validation splits or percentages within these datasets for reproducibility. |
| Hardware Specification | Yes | The Titan Xp GPU used was donated by the NVIDIA Corporation. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies used in the experiments. |
| Experiment Setup | Yes | After 60,000 training iterations (generative quality stabilizes by then) [...] and apply GP (gradient penalty) on both real and fake samples to alleviate overfitting. (A hedged sketch of this penalty follows the table.) |
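
The Experiment Setup row quotes the paper's use of a gradient penalty (GP) on both real and fake samples. Below is a minimal PyTorch sketch of one common form of such a penalty, assuming a zero-centered squared-gradient (R1/R2-style) term; the names `discriminator`, `gp_weight`, `real_images`, and `fake_images` are illustrative assumptions, not taken from the authors' released code.

```python
# Hedged sketch: gradient penalty ||grad_x D(x)||^2 on a batch of samples.
# Assumes a zero-centered squared-gradient (R1/R2-style) penalty; names
# are illustrative, not from the authors' repository.
import torch

def gradient_penalty(discriminator, samples, gp_weight=10.0):
    """Penalize the squared gradient norm of D w.r.t. its inputs,
    averaged over the batch."""
    samples = samples.detach().requires_grad_(True)
    scores = discriminator(samples)
    (grads,) = torch.autograd.grad(
        outputs=scores.sum(), inputs=samples, create_graph=True
    )
    return gp_weight * grads.pow(2).flatten(start_dim=1).sum(dim=1).mean()

# In the discriminator step, the penalty is taken on real and fake
# batches alike, as the quoted setup describes:
#   d_loss = adv_loss \
#       + gradient_penalty(discriminator, real_images) \
#       + gradient_penalty(discriminator, fake_images)
```

Penalizing both batches, rather than real samples only, is the detail the quoted setup highlights as the means of alleviating discriminator overfitting on the small (1,000- or 25-image) target sets.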