VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
Authors: Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, Charles Sutton
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On an extensive set of synthetic and real world image datasets, VEEGAN indeed resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples. |
| Researcher Affiliation | Academia | Akash Srivastava (School of Informatics, University of Edinburgh, akash.srivastava@ed.ac.uk); Lazar Valkov (School of Informatics, University of Edinburgh, L.Valkov@sms.ed.ac.uk); Chris Russell (The Alan Turing Institute, London, crussell@turing.ac.uk); Michael U. Gutmann (School of Informatics, University of Edinburgh, Michael.Gutmann@ed.ac.uk); Charles Sutton (School of Informatics, University of Edinburgh & The Alan Turing Institute, csutton@inf.ed.ac.uk) |
| Pseudocode | Yes | Algorithm 1 VEEGAN training |
| Open Source Code | Yes | VEEGAN is a Variational Encoder Enhancement to Generative Adversarial Networks. https://akashgit.github.io/VEEGAN/ |
| Open Datasets | Yes | On an extensive set of synthetic and real world image datasets, VEEGAN indeed resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples. ... Stacked MNIST ... CIFAR-10 |
| Dataset Splits | No | The paper mentions using 26,000 samples and averaging results over five runs but does not specify explicit training, validation, or test dataset splits, nor does it detail a cross-validation setup. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'standard implementation of DCGAN [17]' and provides a GitHub link for it, implying TensorFlow. However, no specific version numbers for TensorFlow, Python, or other libraries are provided. |
| Experiment Setup | Yes | Generally, we found that VEEGAN performed well with default hyperparameter values, so we did not tune these. ... For the unrolled GAN, we set the number of unrolling steps to five as suggested in the authors' reference implementation. ... For all methods other than VEEGAN, we use the enhanced generator loss function suggested in [7]... Finally, for VEEGAN we pretrain the reconstructor by taking a few stochastic gradient steps with respect to θ before running Algorithm 1. ... we use the same network architectures for the reconstructors and the generators for all methods, namely, fully-connected MLPs with two hidden layers. For the discriminator we use a two layer MLP without dropout or normalization layers. (A code sketch of this setup follows the table.) |
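
The setup quoted above (Algorithm 1, plus two-hidden-layer fully connected MLPs for the generator and reconstructor and an MLP discriminator on (z, x) pairs without dropout or normalization) is enough to reconstruct the training loop. Below is a minimal PyTorch sketch; the paper's released code implies TensorFlow, and the helper names (`mlp`, `train_step`), layer widths, activation choice, and learning rates here are illustrative assumptions, not the authors' values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(sizes):
    """Fully connected net; ReLU hidden activations (the activation is an assumption)."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

z_dim, x_dim, h = 2, 2, 128                  # placeholder dimensions
G = mlp([z_dim, h, h, x_dim])                # generator: z -> x, two hidden layers
R = mlp([x_dim, h, h, z_dim])                # reconstructor: x -> z, two hidden layers
D = mlp([z_dim + x_dim, h, h, 1])            # discriminator on (z, x) pairs, no dropout/norm
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_gr = torch.optim.Adam(list(G.parameters()) + list(R.parameters()), lr=1e-3)

def train_step(x_real):
    """One iteration in the spirit of Algorithm 1 (reconstructor pretraining omitted)."""
    n = x_real.size(0)
    # Discriminator: logistic regression separating (z, G(z)) [label 1]
    # from (R(x), x) [label 0], so its logit estimates the joint log-ratio.
    z = torch.randn(n, z_dim)                # z ~ p0(z)
    d_fake = D(torch.cat([z, G(z).detach()], dim=1))
    d_real = D(torch.cat([R(x_real).detach(), x_real], dim=1))
    loss_d = F.binary_cross_entropy_with_logits(d_fake, torch.ones(n, 1)) \
           + F.binary_cross_entropy_with_logits(d_real, torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator + reconstructor: push down the discriminator logit on (z, G(z))
    # and minimize the l2 reconstruction of the noise, ||z - R(G(z))||^2.
    z = torch.randn(n, z_dim)
    x_fake = G(z)
    loss_gr = D(torch.cat([z, x_fake], dim=1)).mean() \
            + ((z - R(x_fake)) ** 2).sum(dim=1).mean()
    opt_gr.zero_grad(); loss_gr.backward(); opt_gr.step()
```

The detached inputs in the discriminator step keep its update from back-propagating into G and R, mirroring the alternating updates of Algorithm 1; the noise-reconstruction term is what distinguishes VEEGAN from a plain GAN on this architecture.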