Disconnected Manifold Learning for Generative Adversarial Networks
Authors: Mahyar Khayatkhoei, Maneesh K. Singh, Ahmed Elgammal
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct several experiments to illustrate the aforementioned shortcoming of GANs, its consequences in practice, and the effectiveness of our proposed modifications in alleviating these issues. |
| Researcher Affiliation | Collaboration | Mahyar Khayatkhoei, Department of Computer Science, Rutgers University (m.khayatkhoei@cs.rutgers.edu); Ahmed Elgammal, Department of Computer Science, Rutgers University (elgammal@cs.rutgers.edu); Maneesh Singh, Verisk Analytics (maneesh.singh@verisk.com) |
| Pseudocode | Yes | See Appendix A for details of our algorithm and the DMGAN objectives. (Appendix A contains Algorithm 1 Training DMWGAN) |
| Open Source Code | No | The paper does not contain any explicit statement or link providing access to the source code for the described methodology. |
| Open Datasets | Yes | MNIST [16] is particularly suitable since samples with different class labels can be reasonably interpreted as lying on disjoint manifolds... We combine 20K face images from the CelebA dataset [17] and 20K bedroom images from the LSUN Bedrooms dataset [27] to construct a natural image dataset supported on a disconnected manifold. |
| Dataset Splits | No | The paper does not explicitly provide information about training, validation, and test dataset splits. |
| Hardware Specification | No | The paper describes network architectures and training parameters but does not specify the hardware (e.g., GPU/CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and DCGAN-like networks but does not provide version numbers for the software dependencies or libraries used. |
| Experiment Setup | Yes | In all experiments, we train each model for a total of 200 epochs with a five to one update ratio between discriminator and generator... See Appendix B for details of our networks and the hyperparameters. (Appendix B states: We use Adam optimizer with β1 = 0 and β2 = 0.9 for both generator and discriminator. Learning rate for generator and discriminator is 1e-4, and for Q and prior is 1e-5. We also use a learning rate decay of 0.5 per 10000 iterations for the prior training. We use batch size of 64 for all experiments. We use 20 generators for MNIST and 5 for Face-Bed, unless otherwise stated. We train all models for 200 epochs.) A hedged configuration sketch based on these values follows the table. |
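
The Experiment Setup row quotes concrete hyperparameters from Appendix B. The snippet below is a minimal PyTorch sketch of that optimizer configuration, not the authors' released code (which is not available): the `nn.Linear` placeholder networks, the variable names, and the choice of PyTorch itself are assumptions made for illustration.

```python
import torch
from torch import nn, optim

# Hypothetical stand-ins for the paper's DCGAN-like networks; the real
# architectures are described in Appendix B of the paper, not reproduced here.
N_GENERATORS = 20  # 20 generators for MNIST (5 for Face-Bed)
generators = nn.ModuleList([nn.Linear(100, 784) for _ in range(N_GENERATORS)])
discriminator = nn.Linear(784, 1)
q_network = nn.Linear(784, N_GENERATORS)                 # infers which generator produced a sample
prior_logits = nn.Parameter(torch.zeros(N_GENERATORS))   # trainable prior over generators

# Adam with beta1 = 0 and beta2 = 0.9; learning rate 1e-4 for generator and
# discriminator, 1e-5 for Q and the prior (per the quoted Appendix B values).
opt_g = optim.Adam(generators.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_q = optim.Adam(q_network.parameters(), lr=1e-5, betas=(0.0, 0.9))
opt_prior = optim.Adam([prior_logits], lr=1e-5, betas=(0.0, 0.9))

# The prior's learning rate decays by 0.5 every 10000 iterations.
sched_prior = optim.lr_scheduler.StepLR(opt_prior, step_size=10000, gamma=0.5)

BATCH_SIZE = 64
EPOCHS = 200
D_UPDATES_PER_G = 5  # five discriminator updates per generator update

# Training skeleton (losses omitted; they are defined by Algorithm 1 and the
# DMWGAN objectives in Appendix A of the paper):
# for epoch in range(EPOCHS):
#     for real_batch in dataloader:          # batches of size 64
#         for _ in range(D_UPDATES_PER_G):
#             ...update discriminator...
#         ...update generators, Q network, and prior...
#         sched_prior.step()
```

Only the values stated in the quoted text (betas, learning rates, decay schedule, batch size, epoch count, generator counts, and the five-to-one update ratio) are taken from the paper; everything else in the sketch is placeholder structure.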