LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators
Authors: Jianan Li, Jimei Yang, Aaron Hertzmann, Jianming Zhang, Tingfa Xu
ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design. The implementation is based on TensorFlow (Abadi et al., 2016). The network parameters are initialized from zero-mean Gaussian with standard deviation of 0.02. All the networks are optimized using Adam (Kingma & Ba, 2014) with a fixed learning rate of 0.00002. Detailed architectures can be found in the appendix. |
| Researcher Affiliation | Collaboration | Jianan Li (1), Jimei Yang (2), Aaron Hertzmann (2), Jianming Zhang (2), Tingfa Xu (1); (1) Beijing Institute of Technology, (2) Adobe Research. {20090964,xutingfa}@bit.edu.cn, {jimyang,hertzman,jianmzha}@adobe.com |
| Pseudocode | No | The paper describes the architecture and mathematical formulations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that open-source code is provided for the methodology or provide a link to a code repository. |
| Open Datasets | Yes | We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design. For training data, we collect in total around 25,000 layouts from real documents. We use the abstract scene dataset (Zitnick et al., 2016) including boy, girl, glasses, hat, sun, and tree elements. We collect 149 tangram graphic designs including animals, people and objects. |
| Dataset Splits | No | MNIST is a handwritten digit database consisting of 60,000 training and 10,000 testing images. Apart from this MNIST train/test split, no explicit training, validation, and test splits are reported for the remaining experiments. |
| Hardware Specification | No | The paper states 'The implementation is based on TensorFlow (Abadi et al., 2016)' but does not provide specific hardware details such as GPU models, CPU types, or memory used for running experiments. |
| Software Dependencies | No | The paper mentions 'The implementation is based on TensorFlow (Abadi et al., 2016)' and that 'All the networks are optimized using Adam (Kingma & Ba, 2014) with a fixed learning rate of 0.00002.' However, it does not provide specific version numbers for TensorFlow or any other software libraries. |
| Experiment Setup | Yes | The implementation is based on TensorFlow (Abadi et al., 2016). The network parameters are initialized from zero-mean Gaussian with standard deviation of 0.02. All the networks are optimized using Adam (Kingma & Ba, 2014) with a fixed learning rate of 0.00002. Detailed architectures can be found in the appendix. (A hedged code sketch of this configuration follows the table.) |
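
The quoted setup is concrete enough to sketch in code. Below is a minimal TensorFlow reconstruction of the stated training configuration: zero-mean Gaussian weight initialization with standard deviation 0.02, and Adam with a fixed learning rate of 0.00002. The network bodies are placeholders; the paper's actual relation-based generator and wireframe discriminator are specified only in its appendix, so every layer, size, and name here is an illustrative assumption, not the authors' code.

```python
import tensorflow as tf

# Hyperparameters quoted in the paper's experiment setup.
INIT_STDDEV = 0.02     # zero-mean Gaussian weight initialization
LEARNING_RATE = 2e-5   # fixed Adam learning rate (0.00002)

# Initializer matching the stated zero-mean Gaussian, std 0.02.
initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=INIT_STDDEV)

# Placeholder networks: simple Dense stacks standing in for the paper's
# relation-based generator and wireframe discriminator, which are NOT
# reproduced here. Layer widths and output sizes are assumptions.
def build_generator() -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu",
                              kernel_initializer=initializer),
        tf.keras.layers.Dense(6,  # e.g. class probability + geometric params
                              kernel_initializer=initializer),
    ])

def build_discriminator() -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu",
                              kernel_initializer=initializer),
        tf.keras.layers.Dense(1, kernel_initializer=initializer),
    ])

generator = build_generator()
discriminator = build_discriminator()

# One Adam instance per network, both at the quoted fixed learning rate.
g_optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
d_optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
```

Using a separate Adam optimizer per network mirrors standard GAN practice; the paper pins down only the shared learning rate and initialization, so the update loop and loss terms are deliberately left out of this sketch.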