DRAW: A Recurrent Neural Network For Image Generation
Authors: Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, Daan Wierstra
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 provides experimental results on the MNIST, Street View House Numbers and CIFAR-10 datasets, with examples of generated images; and concluding remarks are given in Section 5. We assess the ability of DRAW to generate realistic-looking images by training on three datasets of progressively increasing visual complexity: MNIST (LeCun et al., 1998), Street View House Numbers (SVHN) (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009). |
| Researcher Affiliation | Industry | Karol Gregor KAROLG@GOOGLE.COM Ivo Danihelka DANIHELKA@GOOGLE.COM Alex Graves GRAVESA@GOOGLE.COM Danilo Jimenez Rezende DANILOR@GOOGLE.COM Daan Wierstra WIERSTRA@GOOGLE.COM Google DeepMind |
| Pseudocode | No | The paper describes the model's equations and iterative steps but does not present them in a formally labeled pseudocode or algorithm block (a hedged sketch of that iterative loop appears after this table). |
| Open Source Code | No | The paper mentions an accompanying video but does not provide an explicit statement about releasing source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | We assess the ability of DRAW to generate realistic-looking images by training on three datasets of progressively increasing visual complexity: MNIST (LeCun et al., 1998), Street View House Numbers (SVHN) (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009). |
| Dataset Splits | Yes | The SVHN training set contains 231,053 images, and the validation set contains 4,701 images. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, or memory) used for running experiments were mentioned. |
| Software Dependencies | No | The paper mentions using LSTM and the Adam optimization algorithm but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Network hyper-parameters for all the experiments are presented in Table 3. The Adam optimisation algorithm (Kingma & Ba, 2014) was used throughout. Table 3 lists, per task, the number of glimpses, the LSTM hidden size (#h), the latent size (#z), the read size, and the write size, with specific numerical values for each (an illustrative configuration sketch follows the table). |
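
As the Pseudocode row notes, the paper specifies DRAW through equations and iterative steps rather than an algorithm block. Below is a minimal NumPy sketch of one reading of that loop (read, encode, sample, decode, write, accumulate canvas). The `Linear` stand-in layers, the tanh cells used in place of the paper's LSTMs, the simpler non-attentive read/write variant, and all sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class Linear:
    """Hypothetical dense layer standing in for the paper's learned maps."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0, 0.01, (n_in, n_out))
        self.b = np.zeros(n_out)
    def __call__(self, x):
        return x @ self.W + self.b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes; NOT the paper's Table 3 values.
X, H, Z, T = 28 * 28, 256, 100, 10

# Stand-ins for the encoder/decoder recurrences; the paper uses LSTMs.
enc_cell = Linear(2 * X + H + H, H)   # sees [read, h_dec_prev, h_enc_prev]
dec_cell = Linear(Z + H, H)           # sees [z_t, h_dec_prev]
to_mu, to_logsig = Linear(H, Z), Linear(H, Z)
write_layer = Linear(H, X)

def draw_forward(x):
    """One pass of the DRAW recurrence, simplified (non-attentive reads/writes)."""
    c = np.zeros(X)                   # canvas c_0
    h_enc, h_dec = np.zeros(H), np.zeros(H)
    kl = 0.0
    for t in range(T):
        x_hat = x - sigmoid(c)                          # error image
        r = np.concatenate([x, x_hat])                  # non-attentive read
        h_enc = np.tanh(enc_cell(np.concatenate([r, h_dec, h_enc])))
        mu, logsig = to_mu(h_enc), to_logsig(h_enc)
        z = mu + np.exp(logsig) * rng.normal(size=Z)    # reparameterised sample
        h_dec = np.tanh(dec_cell(np.concatenate([z, h_dec])))
        c = c + write_layer(h_dec)                      # additive canvas update
        # KL(N(mu, sigma^2) || N(0, 1)), accumulated over time steps
        kl += 0.5 * np.sum(mu**2 + np.exp(2 * logsig) - 2 * logsig) - 0.5 * Z
    return sigmoid(c), kl

recon, kl = draw_forward(rng.random(X))
print(recon.shape, float(kl))
```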
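For completeness, the objective that the sketch above approximates is the sum of a reconstruction term and a latent (KL) term; the following LaTeX reconstructs it from Section 2 of the paper, where D is the output distribution and c_T the final canvas:

```latex
\mathcal{L}^{x} = -\log D(x \mid c_T),
\qquad
\mathcal{L}^{z} = \sum_{t=1}^{T} \mathrm{KL}\!\left( Q(Z_t \mid h_t^{\mathrm{enc}}) \,\Big\|\, P(Z_t) \right)
               = \frac{1}{2}\left( \sum_{t=1}^{T} \mu_t^{2} + \sigma_t^{2} - \log \sigma_t^{2} \right) - \frac{T}{2},
\qquad
\mathcal{L} = \left\langle \mathcal{L}^{x} + \mathcal{L}^{z} \right\rangle_{z \sim Q}
```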
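The Experiment Setup row points to Table 3 for hyper-parameters. Expressed in code, such a configuration might look like the sketch below; the keys mirror Table 3's columns, but the numeric values here are placeholders, not the paper's reported settings (consult Table 3 for those).

```python
# Hypothetical per-task config mirroring Table 3's columns.
# Values are placeholders, NOT the paper's reported settings.
mnist_generation = {
    "glimpses": 64,        # T, number of read/write iterations (placeholder)
    "lstm_hidden": 256,    # LSTM #h (placeholder)
    "latent_dim": 100,     # #z (placeholder)
    "read_size": (2, 2),   # N x N read attention patch (placeholder)
    "write_size": (5, 5),  # N x N write attention patch (placeholder)
    "optimizer": "adam",   # the paper states Adam was used throughout
}
```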