Quality Aware Generative Adversarial Networks

Authors: Parimala Kancharla, Sumohana S. Channappayya

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate state-of-the-art performance using the Wasserstein GAN gradient penalty (WGAN-GP) framework over the CIFAR-10, STL-10 and CelebA datasets. From the figures and tables, we see that QAGANs are very competitive with the state-of-the-art methods on all three datasets. (See the WGAN-GP sketch after the table.)
Researcher Affiliation | Academia | Parimala Kancharla, Sumohana S. Channappayya, Department of Electrical Engineering, Indian Institute of Technology Hyderabad, {ee15m17p100001, sumohana}@iith.ac.in
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It describes mathematical formulations and procedures in text and equations.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for their methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | Datasets: We have evaluated the efficacy of the proposed regularizers on three datasets: 1) CIFAR-10 [35] (60K images of 32 × 32 resolution), 2) CelebA [Liu+15] (202.6K face images cropped and resized to resolution 64 × 64), and 3) STL-10 [CNL11] (100K images of resolution 96 × 96 and 48 × 48). (See the dataset-loading sketch after the table.)
Dataset Splits | No | The paper mentions training on the CIFAR-10, CelebA, and STL-10 datasets and evaluation using Inception Score and FID, but it does not specify any training/validation/test splits or how a validation set was used. (See the FID sketch after the table.)
Hardware Specification | No | The paper mentions a 'GPU donation' from NVIDIA, indicating GPUs were used, but it does not provide any specific details about the GPU model, CPU, memory, or other hardware specifications used for the experiments.
Software Dependencies | No | The paper mentions using Adam as the optimizer and TensorFlow and Chainer implementations for computing FID scores, but it does not specify version numbers for any of these software components.
Experiment Setup | Yes | We have used Adam as the optimizer with the standard momentum parameters β1 = 0 and β2 = 0.9. The initial learning rate was set to 0.0002 for the CIFAR-10 and STL-10 datasets and 0.0001 for the CelebA dataset. The learning rate is decreased adaptively. We have empirically chosen the hyperparameters λ1 and λ2 to be 1 and 0.1, respectively. All our models are trained for 100K iterations with a batch size of 64. The discriminator is updated five times for every update of the generator. (See the configuration sketch after the table.)
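
The following is a minimal sketch of the standard WGAN-GP gradient penalty term referenced in the Research Type row, i.e. the objective the paper's quality-aware regularizers build on. It assumes a PyTorch implementation and a generic `discriminator` callable over 4-D image batches; neither is specified by the paper, and the quality-aware terms themselves are not shown here.

```python
import torch

def gradient_penalty(discriminator, real, fake):
    """Standard WGAN-GP term: E[(||grad_x D(x_hat)||_2 - 1)^2] on random interpolates."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)              # per-sample mixing weights
    x_hat = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    d_hat = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)                   # per-sample gradient norm
    return ((grad_norm - 1.0) ** 2).mean()
```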
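For the Open Datasets row, the sketch below shows one way to load the three public datasets at the quoted resolutions. The use of torchvision, the unlabeled STL-10 split, and the 178-pixel center crop for CelebA are illustrative assumptions; the paper does not describe its data-loading pipeline.

```python
from torchvision import datasets, transforms

def make_dataset(name, root="./data"):
    if name == "cifar10":                                  # 60K images at 32 x 32
        return datasets.CIFAR10(root, train=True, download=True,
                                transform=transforms.ToTensor())
    if name == "stl10":                                    # 100K unlabeled images, resized here to 48 x 48
        tf = transforms.Compose([transforms.Resize(48), transforms.ToTensor()])
        return datasets.STL10(root, split="unlabeled", download=True, transform=tf)
    if name == "celeba":                                   # ~202.6K faces, cropped and resized to 64 x 64
        tf = transforms.Compose([transforms.CenterCrop(178),
                                 transforms.Resize(64),
                                 transforms.ToTensor()])
        return datasets.CelebA(root, split="train", download=True, transform=tf)
    raise ValueError(f"unknown dataset: {name}")
```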
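The Dataset Splits and Software Dependencies rows mention evaluation with Inception Score and the Fréchet Inception Distance (FID). The sketch below computes the standard FID formula from pre-extracted Inception activations using NumPy/SciPy; it is not the TensorFlow or Chainer implementation the authors relied on.

```python
import numpy as np
from scipy import linalg

def fid(acts_real, acts_fake):
    """Fréchet Inception Distance between two sets of Inception activations (N x D arrays)."""
    mu_r, mu_f = acts_real.mean(axis=0), acts_fake.mean(axis=0)
    cov_r = np.cov(acts_real, rowvar=False)
    cov_f = np.cov(acts_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)   # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                             # discard tiny imaginary parts from numerics
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```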
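Finally, a minimal sketch of the settings quoted in the Experiment Setup row, again assuming PyTorch; the generator and discriminator are placeholders, and the adaptive learning-rate schedule is not specified in the paper.

```python
import torch

# Settings quoted in the Experiment Setup row.
LAMBDA1, LAMBDA2 = 1.0, 0.1        # empirically chosen regularizer weights
BATCH_SIZE = 64
N_ITERATIONS = 100_000
N_CRITIC = 5                       # discriminator updates per generator update

def make_optimizers(generator, discriminator, dataset="cifar10"):
    """Adam with beta1 = 0, beta2 = 0.9 and the per-dataset initial learning rates."""
    lr = 0.0001 if dataset == "celeba" else 0.0002
    betas = (0.0, 0.9)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=betas)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=betas)
    # The paper states the learning rate is "decreased adaptively" but gives no
    # schedule, so any scheduler attached here would be an assumption.
    return opt_g, opt_d
```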