Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
AdaGAN: Boosting Generative Models
Authors: Ilya O. Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf
NeurIPS 2017 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we report initial empirical results in Section 4, where we compare AdaGAN with several benchmarks, including original GAN and uniform mixture of multiple independently trained GANs." and "4 Experiments: We ran AdaGAN on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset." |
| Researcher Affiliation | Collaboration | Ilya Tolstikhin, MPI for Intelligent Systems, Tübingen, Germany (EMAIL); Sylvain Gelly, Google Brain, Zürich, Switzerland (EMAIL); Olivier Bousquet, Google Brain, Zürich, Switzerland (EMAIL); Carl-Johann Simon-Gabriel, MPI for Intelligent Systems, Tübingen, Germany (EMAIL); Bernhard Schölkopf, MPI for Intelligent Systems, Tübingen, Germany (EMAIL) |
| Pseudocode | Yes | "Algorithm 1: AdaGAN, a meta-algorithm to construct a strong mixture of T individual generative models (e.g. GANs), trained sequentially." |
| Open Source Code | Yes | Code available online at https://github.com/tolstikhin/adagan |
| Open Datasets | Yes | "We ran AdaGAN on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset." and "We ran experiments both on the original MNIST and on the 3-digit MNIST (MNIST3) [5, 4] dataset, obtained by concatenating 3 randomly chosen MNIST images to form a 3-digit number between 0 and 999." |
| Dataset Splits | No | The paper mentions optimizing the learning rate on a 'validation set' but does not provide specific details on the split percentages, sample counts, or the methodology for creating these splits for the toy datasets, nor does it specify splits for MNIST. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, memory, or processor types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with their exact versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions that the learning rate was optimized via grid search and that hyperparameter search was performed, but it does not provide concrete values for specific hyperparameters like learning rate, batch size, or number of epochs. |
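The Pseudocode row above refers to the paper's Algorithm 1: a meta-algorithm that builds a mixture of T generators trained sequentially, reweighting the training data at each round so the next component focuses on modes the current mixture covers poorly. The loop structure can be sketched as follows; note that `fit_component` is a hypothetical stand-in (a weighted Gaussian fit) for training an individual GAN, and the inverse-density reweighting is a simplification of the paper's actual weighting scheme:

```python
import numpy as np

def fit_component(data, weights):
    # Stand-in for GAN training: fit a diagonal Gaussian to the weighted data.
    mean = np.average(data, weights=weights, axis=0)
    var = np.average((data - mean) ** 2, weights=weights, axis=0)
    return mean, np.sqrt(var) + 1e-8

def component_density(comp, data):
    # Density of one diagonal-Gaussian component at each data point.
    mean, std = comp
    z = (data - mean) / std
    norm = np.prod(std * np.sqrt(2.0 * np.pi))
    return np.exp(-0.5 * np.sum(z ** 2, axis=1)) / norm

def adagan(data, T=5, beta=0.3):
    """Sequentially add T components; each new one gets mixture weight beta,
    and the old components are rescaled by (1 - beta), as in Algorithm 1."""
    n = len(data)
    weights = np.full(n, 1.0 / n)   # start from uniform data weights
    mixture = []                    # list of (mixture_weight, component)
    for t in range(T):
        comp = fit_component(data, weights)
        if not mixture:
            mixture.append((1.0, comp))
        else:
            mixture = [(w * (1.0 - beta), c) for w, c in mixture]
            mixture.append((beta, comp))
        # Reweight: points with low mixture density get larger weight,
        # so the next component targets the missing modes.
        dens = sum(w * component_density(c, data) for w, c in mixture)
        weights = 1.0 / (dens + 1e-8)
        weights /= weights.sum()
    return mixture
```

With a two-cluster toy dataset, the first component fits the bulk of the data and later components are pulled toward whichever cluster the mixture underweights, mirroring the missing-mode experiments described in the paper.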