Progressive Augmentation of GANs

Authors: Dan Zhang, Anna Khoreva

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally demonstrate the effectiveness of PA-GAN across different architectures and on multiple benchmarks for the image synthesis task, on average achieving 3 point improvement of the FID score.
Researcher Affiliation | Industry | Dan Zhang, Bosch Center for Artificial Intelligence, dan.zhang2@bosch.com; Anna Khoreva, Bosch Center for Artificial Intelligence, anna.khoreva@bosch.com
Pseudocode | No | The paper describes the proposed method in text and illustrates it with figures, but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/boschresearch/PA-GAN
Open Datasets | Yes | Datasets: We consider four datasets: Fashion-MNIST [34], CIFAR10 [17], CELEBA-HQ (128×128) [15] and Tiny-ImageNet (a simplified version of ImageNet [7])
Dataset Splits | No | Datasets: We consider four datasets: Fashion-MNIST [34], CIFAR10 [17], CELEBA-HQ (128×128) [15] and Tiny-ImageNet (a simplified version of ImageNet [7]), with the training set sizes equal to 60k, 50k, 27k and 100k plus the test set sizes equal to 10k, 10k, 3k, and 10k, respectively. The paper explicitly provides training and test set sizes, but does not specify a separate validation dataset split or its size.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions using the SNDCGAN [24] and SAGAN [35] implementations provided by [18, 35] and the Adam optimizer [16], but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Training details: We use a uniformly distributed noise vector z ∈ [-1, 1]^128, a mini-batch size of 64, and the Adam optimizer [16]. The two time-scale update rule (TTUR) [13] is considered when choosing the learning rates for D and G. For progression scheduling, KID is evaluated using samples from the training set every t = 10k iterations, except for Tiny-ImageNet with t = 20k given its approximately 2× larger training set. More details are provided in Sec. S8 of the supp. material. Following [24, 35], we train SNDCGAN and SAGAN [35] with the non-saturation (NS) and hinge loss, respectively.
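For concreteness, the quoted setup can be summarized in a minimal training-loop sketch. The example below is written in PyTorch purely for illustration; the networks, data loader, learning rates, betas, and iteration budget are placeholder assumptions (the paper only says TTUR is considered when choosing the learning rates for D and G), and the KID-based progression scheduling that defines PA-GAN is indicated only by a comment. The authors' released code at https://github.com/boschresearch/PA-GAN builds on the SNDCGAN and SAGAN implementations referenced in the paper and should be treated as the authoritative reference.

```python
import torch
import torch.nn.functional as F

# Values quoted in the paper's training details.
Z_DIM = 128               # noise z ~ Uniform[-1, 1]^128
BATCH_SIZE = 64
KID_EVAL_EVERY = 10_000   # 20_000 for Tiny-ImageNet (roughly 2x larger training set)

# Train/test sizes as reported; no separate validation split is given.
DATASET_SPLITS = {
    "Fashion-MNIST": {"train": 60_000, "test": 10_000},
    "CIFAR10":       {"train": 50_000, "test": 10_000},
    "CELEBA-HQ":     {"train": 27_000, "test": 3_000},
    "Tiny-ImageNet": {"train": 100_000, "test": 10_000},
}

# Placeholder networks for illustration only; the paper trains SNDCGAN and SAGAN.
IMG_DIM = 784
G = torch.nn.Sequential(torch.nn.Linear(Z_DIM, IMG_DIM), torch.nn.Tanh())
D = torch.nn.Sequential(torch.nn.Linear(IMG_DIM, 1))

# Adam with TTUR-style learning rates; the specific values and betas are
# assumptions, since the paper only states that TTUR is considered.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))


def sample_noise(n: int) -> torch.Tensor:
    """Uniformly distributed noise vector z in [-1, 1]^128."""
    return torch.empty(n, Z_DIM).uniform_(-1.0, 1.0)


def sample_real_batch(n: int) -> torch.Tensor:
    """Hypothetical data-loader stub; replace with one of the four datasets above."""
    return torch.rand(n, IMG_DIM) * 2 - 1


for it in range(1, 20_001):  # iteration budget is a placeholder
    # Discriminator step with the non-saturating (NS) GAN loss.
    x_real = sample_real_batch(BATCH_SIZE)
    x_fake = G(sample_noise(BATCH_SIZE)).detach()
    d_loss = F.softplus(-D(x_real)).mean() + F.softplus(D(x_fake)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (NS loss).
    g_loss = F.softplus(-D(G(sample_noise(BATCH_SIZE)))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if it % KID_EVAL_EVERY == 0:
        # Progression scheduling: the paper evaluates KID on training-set samples
        # at this cadence to decide when to increase the augmentation level; a KID
        # metric and PA-GAN's augmentation logic would plug in here.
        pass
```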