Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
Authors: Emily L. Denton, Soumith Chintala, Arthur Szlam, Rob Fergus
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach using 3 different methods: (i) computation of log-likelihood on a held out image set; (ii) drawing sample images from the model and (iii) a human subject experiment that compares (a) our samples, (b) those of baseline methods and (c) real images. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset. |
| Researcher Affiliation | Collaboration | Emily Denton (Dept. of Computer Science, Courant Institute, New York University); Soumith Chintala, Arthur Szlam, Rob Fergus (Facebook AI Research, New York) |
| Pseudocode | No | The paper describes procedures in text and diagrams but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Torch training and evaluation code, along with model specification files can be found at http://soumith.ch/eyescream/. |
| Open Datasets | Yes | We apply our approach to three datasets: (i) CIFAR10 [17] 32x32 pixel color images of 10 different classes, 100k training samples with tight crops of objects; (ii) STL10 [2] 96x96 pixel color images of 10 different classes, 100k training samples (we use the unlabeled portion of data); and (iii) LSUN [32] 10M images of 10 different natural scene types, downsampled to 64x64 pixels. |
| Dataset Splits | No | The paper mentions using a 'validation set' for model selection and log-likelihood comparison, but does not provide specific details on how this set was created (e.g., percentages, sample counts, or splitting methodology). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or other machine specifications) used to run its experiments. |
| Software Dependencies | No | The paper mentions 'Torch training and evaluation code' but does not provide specific version numbers for Torch or any other software dependencies. |
| Experiment Setup | Yes | The loss in Eqn. 2 is trained using SGD with an initial learning rate of 0.02, decreased by a factor of (1 + 4 * 10^-4) at each epoch. Momentum starts at 0.5, increasing by 0.0008 at each epoch up to a maximum of 0.8. |
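
For anyone reimplementing the setup quoted in the last row, the schedule wording is easy to misread. Below is a minimal Python sketch of one plausible interpretation, assuming "decreased by a factor of (1 + 4 * 10^-4)" means dividing the learning rate by that factor once per epoch and that the momentum ramp is linear with a hard cap. The function name, default arguments, and 0-indexed epoch convention are our own illustration, not taken from the paper or its released Torch code.

```python
# Hypothetical sketch of the SGD schedule quoted in the Experiment Setup row.
# Assumptions (not from the paper's Torch release): the learning rate is
# divided by (1 + 4e-4) each epoch, and momentum grows linearly by 0.0008
# per epoch from 0.5 until it saturates at 0.8.
def sgd_schedule(epoch, base_lr=0.02, lr_decay=4e-4,
                 base_momentum=0.5, momentum_step=0.0008, max_momentum=0.8):
    """Return (learning_rate, momentum) for a given 0-indexed epoch."""
    lr = base_lr / (1.0 + lr_decay) ** epoch
    momentum = min(base_momentum + momentum_step * epoch, max_momentum)
    return lr, momentum

if __name__ == "__main__":
    for epoch in (0, 1, 100, 375, 500):
        lr, m = sgd_schedule(epoch)
        print(f"epoch {epoch:4d}: lr={lr:.6f}  momentum={m:.4f}")
```

Under these assumptions the momentum cap is reached at epoch 375 (0.5 + 0.0008 * 375 = 0.8), while the learning rate decays only slowly, by roughly 4% over 100 epochs.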