Generative Adversarial Nets
Authors: Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. |
| Researcher Affiliation | Collaboration | Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, QC H3C 3J7... Ian Goodfellow is now a research scientist at Google, but did this work earlier as a UdeM student. Jean Pouget-Abadie did this work while visiting Université de Montréal from École Polytechnique. Sherjil Ozair is visiting Université de Montréal from Indian Institute of Technology Delhi. Yoshua Bengio is a CIFAR Senior Fellow. |
| Pseudocode | Yes | Algorithm 1: Minibatch stochastic gradient descent training of generative adversarial nets. (See the training-loop sketch after this table.) |
| Open Source Code | Yes | All code and hyperparameters available at http://www.github.com/goodfeli/adversarial |
| Open Datasets | Yes | We trained adversarial nets on a range of datasets including MNIST [21], the Toronto Face Database (TFD) [27], and CIFAR-10 [19]. |
| Dataset Splits | No | The paper mentions using a 'validation set' for cross-validation of the σ parameter for evaluation, but it does not specify concrete dataset split information (percentages, sample counts, or clear predefined splits) for training, validation, and testing needed to reproduce the data partitioning. (See the Parzen-window sketch after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It only mentions general computing resources like 'Compute Canada, and Calcul Québec for providing computational resources'. |
| Software Dependencies | No | The paper mentions using 'Pylearn2 [11] and Theano [6, 1]' but does not provide specific version numbers for these software dependencies, which are necessary for reproducibility. |
| Experiment Setup | Yes | The generator nets used a mixture of rectifier linear activations [17, 8] and sigmoid activations, while the discriminator net used maxout [9] activations. Dropout [16] was applied in training the discriminator net... The number of steps to apply to the discriminator, k, is a hyperparameter. We used k = 1, the least expensive option, in our experiments... We used momentum in our experiments. |
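The Pseudocode and Experiment Setup rows describe the same procedure: Algorithm 1's minibatch SGD loop with k discriminator updates per generator update, a rectifier/sigmoid generator, a dropout-regularized discriminator, and momentum. Below is a minimal PyTorch sketch of that loop. The layer sizes, learning rate, and momentum value are placeholders (the paper's exact hyperparameters are in its released code), and plain ReLU stands in for the paper's maxout discriminator units.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper's exact layer sizes are in its released code.
Z_DIM, X_DIM, HIDDEN = 100, 784, 256

# Generator: rectifier (ReLU) hidden units with a sigmoid output, as the paper describes.
G = nn.Sequential(
    nn.Linear(Z_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, X_DIM), nn.Sigmoid(),
)

# Discriminator: the paper used maxout units; ReLU is a stand-in here.
# Dropout is applied when training the discriminator, as in the paper.
D = nn.Sequential(
    nn.Linear(X_DIM, HIDDEN), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(HIDDEN, 1), nn.Sigmoid(),
)

# SGD with momentum, matching the paper's optimizer; the rate and momentum are placeholders.
opt_d = torch.optim.SGD(D.parameters(), lr=0.1, momentum=0.5)
opt_g = torch.optim.SGD(G.parameters(), lr=0.1, momentum=0.5)
bce = nn.BCELoss()
K = 1  # discriminator steps per generator step; the paper used k = 1

def train_step(real_batch):
    m = real_batch.size(0)
    ones, zeros = torch.ones(m, 1), torch.zeros(m, 1)

    # Inner loop: k updates of the discriminator (Algorithm 1).
    for _ in range(K):
        fake = G(torch.randn(m, Z_DIM)).detach()
        loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # One update of the generator, via the non-saturating
    # "maximize log D(G(z))" form the paper suggests in practice.
    fake = G(torch.randn(m, Z_DIM))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```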
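The σ parameter flagged in the Dataset Splits row belongs to the paper's evaluation protocol: a Gaussian Parzen window is fit to generated samples, test-set log-likelihood is reported under it, and the window width σ is cross-validated on a validation set. The sketch below, under assumed tensor shapes and a placeholder σ grid, shows how such an estimator works; it is an illustration of the protocol, not the paper's implementation.

```python
import math
import torch

def parzen_log_likelihood(test_x, samples, sigma):
    # Gaussian Parzen-window log-density of each test point under the
    # generated samples: logsumexp over kernels minus the normalizer.
    d2 = torch.cdist(test_x, samples).pow(2)  # (n_test, n_gen) squared distances
    n, dim = samples.shape
    log_norm = math.log(n) + 0.5 * dim * math.log(2 * math.pi * sigma ** 2)
    return torch.logsumexp(-d2 / (2 * sigma ** 2), dim=1) - log_norm

def pick_sigma(valid_x, samples, grid=(0.05, 0.1, 0.2, 0.5)):
    # sigma is chosen by cross-validation on a held-out set, as the paper
    # reports; this candidate grid is a placeholder, not the paper's.
    return max(grid, key=lambda s: parzen_log_likelihood(valid_x, samples, s).mean().item())
```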