DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images

Authors: Bing WANG, Lu Chen, Bo Yang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three datasets clearly show that our method can accurately decompose all 3D objects from 2D views, allowing any interested object to be freely manipulated in 3D space, such as translation, rotation, size adjustment, and deformation." (Section 4 Experiments: 4.1 Datasets; 4.2 Baseline and Metrics; 4.3 3D Scene Decomposition; 4.4 3D Object Manipulation; 4.5 Ablation Study)
Researcher Affiliation | Academia | Bing Wang (1,2,3), Lu Chen (1,2), Bo Yang (1,2). 1: Shenzhen Research Institute, The Hong Kong Polytechnic University; 2: vLAR Group, The Hong Kong Polytechnic University; 3: University of Oxford. Emails: bingwang@polyu.edu.hk, bo.yang@polyu.edu.hk
Pseudocode | Yes | "Algorithm 1: Our Inverse Query Algorithm to manipulate the learned implicit fields."
Open Source Code | Yes | "Our code and dataset are available at https://github.com/vLAR-group/DM-NeRF"
Open Datasets | Yes | DM-SR: "... we create a synthetic dataset with 8 different and complex indoor rooms, called DM-SR. ... Our code and dataset are available at https://github.com/vLAR-group/DM-NeRF." Replica: "Replica (Straub et al., 2019) is a reconstruction-based 3D dataset of high-fidelity scenes." ScanNet: "ScanNet (Dai et al., 2017) is a large-scale, challenging real-world dataset."
Dataset Splits | No | The paper reports training/testing splits (e.g., "300 views for training" and "100 views for testing" on DM-SR) but does not mention a separate validation split.
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions the Detectron2 library but gives no version numbers for any software dependency, such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "The single hyper-parameter for our object field d is set as 0.05 meters in all experiments. ... we carefully fine-tune both models using up to 480 epochs until convergence with learning rate 5e-4 and then pick the best models on the testing split of each scene for comparison."
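The setup values quoted above can be collected into a small configuration sketch for anyone attempting a reproduction. This is not the authors' code: the key names below are hypothetical; only the numeric values (object-field margin d, epoch budget, learning rate, and the DM-SR view split) come from the paper.

```python
# Hedged reproduction sketch of DM-NeRF's reported training settings.
# All dictionary keys are illustrative names, not identifiers from the
# authors' repository; values are taken from the paper's text.
config = {
    "object_field_margin_m": 0.05,   # hyper-parameter d, in meters (all experiments)
    "max_epochs": 480,               # fine-tune up to 480 epochs until convergence
    "learning_rate": 5e-4,
    "dm_sr_split": {
        "train_views": 300,
        "test_views": 100,           # no separate validation split is reported
    },
    # Model selection: best checkpoint chosen on each scene's testing split.
    "selection": "best_on_test_split_per_scene",
}

print(config["learning_rate"], config["dm_sr_split"]["train_views"])
```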