Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models

Authors: Aditya Grover, Manik Dhar, Stefano Ermon

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Results on MNIST and CIFAR-10 demonstrate that hybrid training can attain high held-out likelihoods while retaining visual fidelity in the generated samples." |
| Researcher Affiliation | Academia | Aditya Grover, Manik Dhar, Stefano Ermon; Department of Computer Science, Stanford University; {adityag, dmanik, ermon}@cs.stanford.edu |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for reproducing the results is available at https://github.com/ermongroup/flow-gan. |
| Open Datasets | Yes | "We compare learning of Flow-GANs using MLE and adversarial learning (ADV) for the MNIST dataset of handwritten digits (LeCun, Cortes, and Burges 2010) and the CIFAR-10 dataset of natural images (Krizhevsky and Hinton 2009)." |
| Dataset Splits | No | The paper mentions "validation NLLs" and "train NLLs" (Section 3.3) for MNIST and CIFAR-10, but it does not explicitly state training/validation/test split percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions various models and architectures (e.g., DCGAN, NICE, Real-NVP) but does not list specific software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | No | The paper names the chosen architectures (NICE, Real-NVP) and divergence (Wasserstein distance) and states that "Further experimental details are provided in a companion technical report", so specific hyperparameters and training configurations are not in the main text. |
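
Since the paper provides no pseudocode and defers experimental details to a companion technical report, the sketch below illustrates the hybrid objective the paper describes: a generator built from invertible coupling layers (NICE-style, so exact log-likelihoods are tractable) trained with a WGAN-style adversarial term plus a likelihood term weighted by lambda. This is a minimal, hedged reconstruction, not the authors' implementation; all names, dimensions, and hyperparameter values (`DIM`, hidden sizes, `lam`) are illustrative assumptions.

```python
# Minimal sketch of a Flow-GAN-style hybrid objective (assumed PyTorch setup;
# the original repository uses TensorFlow). Everything here is illustrative.
import torch
import torch.nn as nn

DIM = 784  # e.g., flattened MNIST; an assumption for illustration


class AdditiveCoupling(nn.Module):
    """NICE-style additive coupling: invertible and volume-preserving,
    so its log-det-Jacobian is exactly zero."""

    def __init__(self, dim, hidden=512, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(), nn.Linear(hidden, half)
        )

    def forward(self, x):  # data -> latent
        x1, x2 = x.chunk(2, dim=1)
        if self.flip:
            x1, x2 = x2, x1
        y1, y2 = x1, x2 + self.net(x1)
        if self.flip:
            y1, y2 = y2, y1
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):  # latent -> data (used for sampling)
        y1, y2 = y.chunk(2, dim=1)
        if self.flip:
            y1, y2 = y2, y1
        x1, x2 = y1, y2 - self.net(y1)
        if self.flip:
            x1, x2 = x2, x1
        return torch.cat([x1, x2], dim=1)


class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AdditiveCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)
        )
        self.prior = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):
        z = x
        for layer in self.layers:
            z = layer(z)
        # additive couplings contribute log|det J| = 0, so the exact
        # log-likelihood is just the prior log-density of z = f(x)
        return self.prior.log_prob(z).sum(dim=1)

    def sample(self, n):
        z = self.prior.sample((n, DIM))
        for layer in reversed(self.layers):
            z = layer.inverse(z)
        return z


flow = Flow(DIM)
critic = nn.Sequential(nn.Linear(DIM, 512), nn.ReLU(), nn.Linear(512, 1))
lam = 0.1  # likelihood weight; an assumed value, not from the paper


def generator_loss(real_x):
    fake_x = flow.sample(real_x.size(0))
    adv = -critic(fake_x).mean()         # WGAN-style generator term
    nll = -flow.log_prob(real_x).mean()  # exact NLL from the flow
    return adv + lam * nll               # hybrid: adversarial + lambda * MLE
```

The key point the sketch makes concrete is why a flow generator enables hybrid training at all: because the generator is invertible with a tractable Jacobian, the same network yields both samples for the critic and exact held-out likelihoods, which pure GAN generators cannot provide.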