A Layer-Based Sequential Framework for Scene Generation with GANs
Authors: Mehmet Ozgur Turkoglu, William Thong, Luuk Spreeuwers, Berkay Kicanaoglu
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Via quantitative and qualitative experiments on a subset of the MS-COCO dataset, we show that our proposed framework produces not only more diverse images but also copes better with affine transformations and occlusion artifacts of foreground objects than its counterparts. |
| Researcher Affiliation | Academia | Mehmet Ozgur Turkoglu,1 William Thong,2 Luuk Spreeuwers,1 Berkay Kicanaoglu2 1University of Twente, 2University of Amsterdam |
| Pseudocode | No | The paper describes the model architecture and processes using text and figures, but no structured pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | The code is available at https://github.com/0zgur0/Seq_Scene_Gen. |
| Open Datasets | Yes | MS-COCO dataset (Lin et al. 2014) is used to evaluate the performance of our proposed model. |
| Dataset Splits | Yes | The dataset contains 164K training images over 80 semantic classes. Additionally, we also compute the mean IoU scores on the images generated conditioned on semantic maps of the validation set (450 images). |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models or memory) are provided for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers and network architectures but does not specify software dependencies like programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | Parameters are updated with the Adam optimizer (β1 = 0, β2 = 0.9, learning rate of 2e-4 and divided by 2 every 80 epochs) (Kingma and Ba 2014). All the models are trained for 480 epochs with a batch size of 16. The parameters of the generators are updated after 5 updates of the discriminator. The tradeoff hyper-parameters in the foreground generator loss function (Eq. 8) are set to λl = 0.1, λr = 1e-5, λfm = 1 and in the background generator loss function (Eq. 3) to λr = 100, λfm = 1. |
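The experiment-setup row above can be made concrete with a small sketch. The paper does not name a framework, so this is plain Python that only encodes the stated hyperparameters (Adam with β1 = 0, β2 = 0.9, base learning rate 2e-4 halved every 80 epochs, 480 epochs total, 5 discriminator updates per generator update); the function and constant names are illustrative, not from the authors' code.

```python
# Hedged sketch of the training schedule reported in the paper.
# Names here (BASE_LR, lr_at_epoch, ...) are assumptions for illustration.

BASE_LR = 2e-4        # Adam learning rate (beta1 = 0, beta2 = 0.9)
TOTAL_EPOCHS = 480    # all models trained for 480 epochs
DECAY_EVERY = 80      # learning rate divided by 2 every 80 epochs
BATCH_SIZE = 16
D_STEPS_PER_G = 5     # generator updated after 5 discriminator updates

def lr_at_epoch(epoch: int) -> float:
    """Learning rate at a given epoch under the halving schedule."""
    return BASE_LR * 0.5 ** (epoch // DECAY_EVERY)

def generator_updates(total_discriminator_steps: int) -> int:
    """Number of generator updates for a given number of D updates."""
    return total_discriminator_steps // D_STEPS_PER_G
```

For example, `lr_at_epoch(0)` gives 2e-4, while by epoch 160 the rate has been halved twice to 5e-5; over the 480-epoch run the schedule halves the rate six times in total.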