DeBaRA: Denoising-Based 3D Room Arrangement Generation

Authors: Léopold Maillard, Nicolas Sereyjol-Garros, Tom Durand, Maks Ovsjanikov

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach through extensive experiments and demonstrate significant improvement upon state-of-the-art approaches in a range of scenarios."
Researcher Affiliation | Collaboration | Léopold Maillard (1,2), Nicolas Sereyjol-Garros, Tom Durand (2), Maks Ovsjanikov (1); (1) LIX, École Polytechnique, IP Paris; (2) Dassault Systèmes
Pseudocode | Yes | "Algorithm 1: Self Score Evaluation"
Open Source Code | No | "Unfortunately, the code for DeBaRA cannot be disclosed due to author affiliation."
Open Datasets | Yes | "Our experiments are conducted on the 3D-FRONT [10] synthetic indoor layouts, furnished with assets from 3D-FUTURE [11] that we use as the object retrieval database."
Dataset Splits | No | The paper reports train/test splits ("leading respectively to 2338/587 and 2071/516 train/test splits") but does not specify a validation split, which limits reproducibility.
Hardware Specification | Yes | "All the training and evaluation experiments as well as the computation of generation times reported in Table 5 have been performed on a single NVIDIA RTX A6000 GPU."
Software Dependencies | No | While PyTorch and Meta Llama-3-8B are mentioned, specific version numbers for PyTorch and other key software components are not provided, preventing full reproducibility of the software environment.
Experiment Setup | Yes | "We trained our models separately on the 3D-FRONT [10] living room and dining room subsets for 3000 epochs with a batch size of 32, monitoring the validation loss to avoid overfitting the training set in the late iterations. We use the AdamW [29] optimizer with its PyTorch default parameters and learning rate η = 10⁻⁴, scheduled with a linear warmup phase for the first 50 epochs, starting at 0.01·η. Following this, a cosine annealing schedule [28] reduces η to a minimum of 10⁻⁸ over 2200 epochs."
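
As a concrete reference, below is a minimal PyTorch sketch of the optimizer and learning-rate schedule described above. Since the DeBaRA training code is not released, the model and training loop are placeholders; only the AdamW settings, the 50-epoch linear warmup from 0.01·η, and the cosine annealing to 10⁻⁸ over 2200 epochs follow the paper's description.

```python
# Minimal sketch of the reported training schedule, assuming one scheduler
# step per epoch. `model` is a placeholder; the actual DeBaRA network and
# data pipeline are not public.
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(8, 8)  # placeholder for the denoising network

eta = 1e-4  # base learning rate reported in the paper
optimizer = torch.optim.AdamW(model.parameters(), lr=eta)  # PyTorch defaults otherwise

# Linear warmup over the first 50 epochs, starting at 0.01 * eta ...
warmup = LinearLR(optimizer, start_factor=0.01, end_factor=1.0, total_iters=50)
# ... then cosine annealing down to 1e-8 over the next 2200 epochs.
cosine = CosineAnnealingLR(optimizer, T_max=2200, eta_min=1e-8)
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[50])

for epoch in range(3000):  # 3000 epochs total, batch size 32 per the paper
    # ... one training epoch over 3D-FRONT batches would go here ...
    optimizer.step()   # placeholder; normally called once per batch
    scheduler.step()   # stepped once per epoch
```

Note that CosineAnnealingLR is periodic, so a real run would hold the learning rate at its minimum after epoch 2250 (50 warmup + 2200 annealing epochs); the paper does not state how the remaining epochs are scheduled.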