Incremental Scene Synthesis

Authors: Benjamin Planche, Xuejian Rong, Ziyan Wu, Srikrishna Karanam, Harald Kosch, YingLi Tian, Jan Ernst, Andreas Hutter

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate efficacy on various 2D as well as 3D data." "We demonstrate our solution on various synthetic and real 2D and 3D environments." "Table 1: Quantitative comparison on 2D and 3D scenes..." "Table 2: Ablation study on CelebA..."
Researcher Affiliation | Collaboration | 1. Siemens Corporate Technology, Munich, Germany; 2. University of Passau, Passau, Germany; 3. The City College, City University of New York, New York, NY; 4. Siemens Corporate Technology, Princeton, NJ
Pseudocode | No | No pseudocode or algorithm block is present.
Open Source Code | No | No statement is made about the availability of source code, nor is a link provided.
Open Datasets | Yes | "We use a synthetic dataset of indoor 83×83 floor plans rendered using the HoME platform [2] and SUNCG data [20] (8,640 training + 2,240 test images from random rooms ('office', 'living', and 'bedroom'))." "Similar to Fraccaro et al. [8], we also consider an agent exploring real pictures from the CelebA dataset [13]..." "As a first 3D experiment, we recorded, with the ViZDoom platform [27]..." "We then consider the Active Vision Dataset (AVD) [1]..."
Dataset Splits | Yes | "8,640 training + 2,240 test images from random rooms..." "34 training and 6 testing episodes..." "We selected 15 [scenes] for training and 4 for testing as suggested by the dataset authors..." (a minimal sketch of these reported splits follows the table)
Hardware Specification | Yes | "Note that on a Nvidia Titan X, the whole process (registering 5 views, localizing the agent, recalling the 5 images, and generating 5 new ones) takes less than 1s."
Software Dependencies | No | The paper mentions platforms such as ViZDoom but does not specify version numbers for any software dependencies (programming languages, libraries, or frameworks).
Experiment Setup | No | The paper describes experimental datasets and agent characteristics but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations.
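The split counts quoted in the Dataset Splits row are the main reproducibility anchors the paper gives for its data partitioning. A minimal sketch, assuming nothing about the actual file layout or loading code (the dataset labels, field names, and structure below are hypothetical), records those reported counts so a re-implementation can be checked against them:

```python
# Hypothetical reproduction aid: only the split counts are quoted from the
# paper; dataset labels, field names, and this structure are assumptions.
from dataclasses import dataclass


@dataclass
class SplitSpec:
    dataset: str   # informal dataset label (assumed naming)
    train: int     # reported number of training items
    test: int      # reported number of test items
    unit: str      # what is being counted (images, episodes, scenes)


REPORTED_SPLITS = [
    SplitSpec("HoME/SUNCG floor plans", train=8640, test=2240, unit="images"),
    SplitSpec("ViZDoom recordings", train=34, test=6, unit="episodes"),
    SplitSpec("Active Vision Dataset (AVD)", train=15, test=4, unit="scenes"),
]

if __name__ == "__main__":
    for s in REPORTED_SPLITS:
        total = s.train + s.test
        print(f"{s.dataset}: {s.train}/{s.test} {s.unit} "
              f"({s.train / total:.0%} train)")
```

Keeping the counts in one place makes it easy to flag a mismatch early in a reproduction attempt, for example a re-rendered HoME/SUNCG set that does not contain exactly 8,640 + 2,240 images.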