LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation

Authors: Jianwei Yang, Anitha Kannan, Dhruv Batra, Devi Parikh

ICLR 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct qualitative and quantitative evaluations on three datasets: 1) MNIST (Le Cun et al., 1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010)." |
| Researcher Affiliation | Collaboration | Jianwei Yang, Virginia Tech, Blacksburg, VA (jw2yang@vt.edu); Anitha Kannan, Facebook AI Research, Menlo Park, CA (akannan@fb.com); Dhruv Batra and Devi Parikh, Georgia Institute of Technology, Atlanta, GA ({dbatra, parikh}@gatech.edu) |
| Pseudocode | Yes | "Pseudo-code for our approach and detailed model configuration are provided in the Appendix." |
| Open Source Code | No | The paper states "We develop LR-GAN based on open source code (https://github.com/soumith/dcgan.torch)", which refers to a third-party baseline (DCGAN) used as a starting point, not a release of the LR-GAN implementation itself. |
| Open Datasets | Yes | "We mainly evaluate our approach on four datasets: MNIST-ONE (one digit) and MNIST-TWO (two digits) synthesized from MNIST (Le Cun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009) and CUB-200 (Welinder et al., 2010)." |
| Dataset Splits | No | The paper mentions training and testing, and refers to a 'validation set' when discussing evaluation metrics, but does not give concrete training/validation/test splits (e.g., percentages or exact counts) for its own experiments. |
| Hardware Specification | No | The paper does not report the hardware (e.g., GPU or CPU models) used to run the experiments. |
| Software Dependencies | No | The paper states "We develop LR-GAN based on open source code (https://github.com/soumith/dcgan.torch)", implying the use of Torch, but it does not specify version numbers for Torch or any other software dependencies. |
| Experiment Setup | Yes | "The dimensions of random vectors and hidden vectors are all set to 100. In both generator and discriminator, all the (fractional) convolutional layers have 4×4 filter size with stride 2. Please see the Appendix (Section 6.2) for details about the configurations." |
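The quoted experiment setup follows the standard DCGAN layer pattern. As an illustration only, and not the authors' released code, the minimal PyTorch sketch below shows a generator consistent with that description: a 100-dimensional noise vector and 4×4 fractionally-strided (transposed) convolutions with stride 2. The 64×64 output resolution, channel widths, batch normalization, and activation choices are assumptions carried over from the DCGAN baseline the paper builds on; the sketch does not include LR-GAN's recursive, layered composition of background and foreground layers.

```python
# Minimal sketch (assumptions noted above, not the LR-GAN implementation):
# a DCGAN-style generator whose hyperparameters match the reported setup --
# a 100-dim noise vector and 4x4 (fractional) convolutions with stride 2.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, base_channels=64, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project the 100-d noise vector to a 4x4 feature map.
            nn.ConvTranspose2d(z_dim, base_channels * 8, kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(base_channels * 8),
            nn.ReLU(inplace=True),
            # Each transposed convolution uses a 4x4 kernel with stride 2,
            # doubling the spatial resolution: 4 -> 8 -> 16 -> 32 -> 64.
            nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, out_channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, 100) noise vector, reshaped to a 1x1 spatial map.
        return self.net(z.view(z.size(0), -1, 1, 1))

# Usage: sample a batch of 100-d noise vectors and generate 64x64 images.
z = torch.randn(8, 100)
images = Generator()(z)  # shape: (8, 3, 64, 64)
```

The discriminator described in the paper mirrors this structure with ordinary stride-2, 4×4 convolutions; the exact layer counts and channel widths are given in the paper's Appendix (Section 6.2).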