MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

Authors: Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, Andreas Geiger

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on multiple challenging datasets, ranging from object-level reconstruction on the DTU dataset [1], over room-level reconstruction on Replica [61] and ScanNet [12], to large-scale indoor scene reconstruction on Tanks and Temples [30]."
Researcher Affiliation | Academia | 1 University of Tübingen; 2 ETH Zurich; 3 MPI for Intelligent Systems, Tübingen; 4 Czech Technical University in Prague
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "https://niujinshuchong.github.io/monosdf" and "Code and data are released."
Open Datasets | Yes | "Datasets. While previous neural implicit-based reconstruction methods mainly focused on single-object scenes with many input views, in this work, we investigate the importance of monocular geometric cues for scaling to more complex scenes. Thus we consider: a) Real-world indoor scans: Replica [61] and ScanNet [12]; b) Real-world large-scale indoor scenes: Tanks and Temples [30] advanced scenes; c) Object-level scenes: DTU [1] in the sparse 3-view setting from [40, 76]."
Dataset Splits | Yes | "Datasets. While previous neural implicit-based reconstruction methods mainly focused on single-object scenes with many input views, in this work, we investigate the importance of monocular geometric cues for scaling to more complex scenes. Thus we consider: a) Real-world indoor scans: Replica [61] and ScanNet [12]; b) Real-world large-scale indoor scenes: Tanks and Temples [30] advanced scenes; c) Object-level scenes: DTU [1] in the sparse 3-view setting from [40, 76]."
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types) are provided in the main text. The paper mentions in its checklist that "We describe details of our computational resources in supplementary material."
Software Dependencies | No | The paper mentions "PyTorch [46]" but does not provide a specific version number for it or any other software dependency.
Experiment Setup | Yes | "Implementation Details. We implement our method in PyTorch [46] and use the Adam optimizer [29] with a learning rate of 5e-4 for neural networks and 1e-2 for feature grids and dense SDF grids. We set λ1, λ2, λ3 to 0.1, 0.1, 0.05, respectively. We sample 1024 rays per iteration and apply the error-bounded sampling strategy introduced by [74] to sample points along each ray."
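The quoted setup (two learning rates, three loss weights) maps directly onto PyTorch parameter groups. Below is a minimal sketch of that configuration, not the authors' code: `mlp` and `feature_grid` are illustrative stand-ins for the paper's networks and grids, and the quote does not restate which loss term each λ weights, so the weights are left as plain constants.

```python
import torch

# Stand-ins for the paper's SDF/color networks and learnable grids
# (the real MonoSDF models are much larger; these are placeholders).
mlp = torch.nn.Linear(3, 256)
feature_grid = torch.nn.Parameter(torch.zeros(8, 32, 32, 32))

# Two Adam parameter groups, mirroring the quoted learning rates:
# 5e-4 for neural networks, 1e-2 for feature / dense SDF grids.
optimizer = torch.optim.Adam([
    {"params": mlp.parameters(), "lr": 5e-4},
    {"params": [feature_grid], "lr": 1e-2},
])

# Loss weights as reported: lambda1 = 0.1, lambda2 = 0.1, lambda3 = 0.05,
# applied to the auxiliary loss terms on top of the RGB reconstruction loss.
lam1, lam2, lam3 = 0.1, 0.1, 0.05

# Per the quote, each iteration samples 1024 rays.
rays_per_iter = 1024
```

Using separate parameter groups is the standard way to give grid parameters a larger learning rate than MLP weights in a single optimizer, which matches the two rates quoted above.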