Bayesian GAN

Authors: Yunus Saatci, Andrew G. Wilson

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed Bayesian GAN (henceforth titled Bayes GAN) on six benchmarks (synthetic, MNIST, CIFAR-10, SVHN, and CelebA), each with four different numbers of labelled examples. We consider multiple alternatives, including: the DCGAN [9], the recent Wasserstein GAN (W-DCGAN) [1], an ensemble of ten DCGANs (DCGAN-10) formed from 10 random subsets 80% the size of the training set, and a fully supervised convolutional neural network.
Researcher Affiliation | Collaboration | Yunus Saatchi (Uber AI Labs), Andrew Gordon Wilson (Cornell University)
Pseudocode | Yes | Algorithm 1: One iteration of sampling for the Bayesian GAN.
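Algorithm 1 in the paper draws posterior samples of the generator and discriminator weights with stochastic gradient HMC (SGHMC). A minimal NumPy sketch of one such update is below; the function name, the `friction` and `lr` values, and the `grad_log_post` signature are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sghmc_step(theta, v, grad_log_post, lr=0.1, friction=0.1, rng=None):
    """One SGHMC update (Chen et al., 2014), the sampler style used by
    Algorithm 1 in the paper. All names/defaults here are illustrative.

    theta         : current weight sample (flat array)
    v             : momentum carried between iterations
    grad_log_post : callable returning the gradient of the log posterior
                    at theta (in practice, a stochastic minibatch estimate)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Friction term plus matching injected noise keep the chain sampling
    # rather than optimizing.
    noise = rng.normal(0.0, np.sqrt(2.0 * friction * lr), size=theta.shape)
    v = (1.0 - friction) * v + lr * grad_log_post(theta) + noise
    return theta + v, v

# Toy usage: sample from a standard normal posterior, whose log density
# has gradient -theta.
theta, v = np.zeros(3), np.zeros(3)
for _ in range(200):
    theta, v = sghmc_step(theta, v, lambda t: -t)
```

In the Bayesian GAN both networks get such an update each iteration, with the log posterior combining the GAN likelihood terms and the weight prior.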
Open Source Code | Yes | We have made code and tutorials available at https://github.com/andrewgordonwilson/bayesgan.
Open Datasets | Yes | We evaluate our proposed Bayesian GAN (henceforth titled Bayes GAN) on six benchmarks (synthetic, MNIST, CIFAR-10, SVHN, and CelebA)... MNIST is a well-understood benchmark dataset consisting of 60k (50k train, 10k test) labeled images of hand-written digits. CIFAR-10 is also a popular benchmark dataset [7], with 50k training and 10k test images... The Street View House Numbers (SVHN) dataset... The large CelebA dataset contains 120k celebrity faces...
Dataset Splits | Yes | MNIST is a well-understood benchmark dataset consisting of 60k (50k train, 10k test) labeled images of hand-written digits. CIFAR-10 is also a popular benchmark dataset [7], with 50k training and 10k test images... Standard train/test splits are used for MNIST, CIFAR-10 and SVHN. For CelebA we use a test set of size 10k.
Hardware Specification | Yes | All experiments were performed on a single Titan X GPU for consistency, but Bayes GAN and DCGAN-10 could be sped up to approximately the same runtime as DCGAN through multi-GPU parallelization.
Software Dependencies | No | No software dependencies with version numbers (e.g., Python 3.8 or CPLEX 12.4) were explicitly stated for reproducibility.
Experiment Setup | Yes | For the Bayesian GAN we place a N(0, 10I) prior on both the generator and discriminator weights and approximately integrate out z using simple Monte Carlo samples. We run Algorithm 1 for 5000 iterations, then collect weight samples every 1000 iterations and record out-of-sample predictive accuracy using Bayesian model averaging (see Eq. 5). For Algorithm 1 we set Jg = 10, Jd = 1, M = 2, and nd = ng = 64. As suggested in Appendix G of Chen et al. [3], we employed a learning rate schedule decaying as γ/d, where d is the number of unique real datapoints seen so far.
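Two pieces of this setup translate directly into code: the γ/d learning-rate decay and the Bayesian-model-averaged prediction over collected weight samples (Eq. 5 in the paper). The sketch below is a minimal illustration under assumed interfaces; `predict_fn` and its signature are hypothetical stand-ins, not the repository's API.

```python
import numpy as np

def lr_schedule(gamma, num_unique_real_seen):
    """gamma / d decay (per Appendix G of Chen et al. [3]), where d is the
    number of unique real datapoints seen so far. Guard against d = 0."""
    return gamma / max(num_unique_real_seen, 1)

def bma_predict(weight_samples, predict_fn, x):
    """Bayesian model averaging over posterior weight samples (Eq. 5 style):
    average class probabilities across samples, then take the argmax.
    `predict_fn(w, x)` -> array of class probabilities; signature assumed."""
    probs = np.mean([predict_fn(w, x) for w in weight_samples], axis=0)
    return probs.argmax(axis=-1)
```

For example, after 1000 unique real datapoints a base rate of 1e-3 would have decayed to 1e-6, and predictions would average over the weight samples collected every 1000 iterations.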