On GANs and GMMs

Authors: Eitan Richardson, Yair Weiss

NeurIPS 2018

The assessment below lists each reproducibility variable, its result, and the LLM response supporting that result.
Research Type: Experimental
"In this paper, we examine the utility of GANs in learning statistical models of images by comparing them to perhaps the simplest statistical model, the Gaussian Mixture Model. First, we present a simple method to evaluate generative models based on relative proportions of samples that fall into predetermined bins. ... Second, we compare the performance of GANs to GMMs trained on the same datasets. ... Our results show that GMMs can generate realistic samples (although less sharp than those of GANs) but also capture the full distribution, which GANs fail to do." (A sketch of this binning-based evaluation appears after the table.)
Researcher Affiliation: Academia
Eitan Richardson, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel (eitanrich@cs.huji.ac.il); Yair Weiss, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel (yweiss@cs.huji.ac.il)
Pseudocode: No
The paper includes mathematical equations and descriptions of methods, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code: Yes
Code is available at https://github.com/eitanrich/gans-n-gmms
Open Datasets: Yes
"We conduct our experiments on three popular datasets of natural images: CelebA [27] (aligned, cropped and resized to 64×64), SVHN [30] and MNIST [25]."
Dataset Splits: No
The paper reports 'TRAIN' and 'TEST' scores in Tables 1-3, indicating that training and test sets were used, but it does not give exact percentages or sample counts for the splits, nor does it mention a dedicated validation set.
Hardware Specification: No
The paper states that optimization was performed on GPU ('utilize available differentiable programming frameworks [1] that perform the optimization on GPU', Section 3.1), but it does not specify the GPU model or any other hardware details.
Software Dependencies: No
The paper mentions using 'differentiable programming frameworks [1]' for optimization, where reference [1] points to TensorFlow, but it gives no version numbers for TensorFlow or any other software dependency.
Experiment Setup: No
The paper describes general aspects of the experimental setup, such as K-means clustering for initialization and generating 20,000 samples for evaluation, and states that 'The supplementary material provides additional details about the training process.' However, the main text gives no specific hyperparameter values (e.g., learning rate, batch size) or system-level training settings. (A sketch of a K-means-initialized GMM fit follows the evaluation sketch below.)
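To make the binning-based evaluation quoted in the Research Type row concrete, here is a minimal sketch, assuming the bins are Voronoi cells of K-means centroids fit on the training set and that a per-bin two-proportion z-test flags significant differences. The function name, bin count, and significance level are illustrative assumptions, not the authors' reference implementation; for that, see the linked repository.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def bin_proportion_eval(train, generated, n_bins=100, alpha=0.05):
    """Fraction of bins whose generated-sample proportion differs
    significantly from the training proportion (two-proportion z-test).
    Bins are Voronoi cells of K-means centroids fit on the training set."""
    km = KMeans(n_clusters=n_bins, n_init=10).fit(train)
    n_t, n_g = len(train), len(generated)
    p_t = np.bincount(km.labels_, minlength=n_bins) / n_t
    p_g = np.bincount(km.predict(generated), minlength=n_bins) / n_g

    # Pooled standard error for the difference of two proportions.
    pooled = (p_t * n_t + p_g * n_g) / (n_t + n_g)
    se = np.sqrt(pooled * (1.0 - pooled) * (1.0 / n_t + 1.0 / n_g))
    z = np.abs(p_t - p_g) / np.maximum(se, 1e-12)
    return float((z > norm.ppf(1.0 - alpha / 2)).mean())

# Hypothetical usage on flattened image arrays:
# score = bin_proportion_eval(x_train.reshape(len(x_train), -1),
#                             x_generated.reshape(len(x_generated), -1))
```

A low score means the generated samples populate the bins in roughly the same proportions as real data, which is the distribution-coverage property the abstract says GANs fail to achieve.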
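Similarly, since the Experiment Setup row notes K-means initialization and a 20,000-sample evaluation draw without further detail, the following is a minimal sketch of that pipeline using scikit-learn on CPU rather than the paper's GPU/TensorFlow optimization. The component count, covariance type, and data shape are placeholder assumptions, not values reported in the main text.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: flattened 64x64x3 images; shape and count are illustrative.
x_train = np.random.rand(20000, 64 * 64 * 3)

# K-means initialization, as mentioned in the paper; n_components and
# covariance_type are assumptions chosen to keep the sketch tractable.
gmm = GaussianMixture(n_components=100, covariance_type='diag',
                      init_params='kmeans', max_iter=100)
gmm.fit(x_train)

# Draw 20,000 samples for evaluation, matching the count quoted above.
samples, _ = gmm.sample(n_samples=20000)
```

The diagonal covariance here is a simplification for illustration; the paper's actual training configuration is deferred to its supplementary material and released code.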