NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Authors: Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state-of-the-arts in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion."
Researcher Affiliation | Academia | The University of Hong Kong; Max Planck Institute for Informatics; Texas A&M University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "To evaluate our approach and baseline methods, we use 15 scenes from the DTU dataset [11]... We further tested on 7 challenging scenes from the low-res set of the BlendedMVS dataset [45]" (CC-4 license).
Dataset Splits | No | The paper mentions training models for a certain number of iterations but does not explicitly specify a validation split or how validation was performed.
Hardware Specification | Yes | "We sample 512 rays per batch and train our model for 300k iterations for 14 hours (for the w/ mask setting) and 16 hours (for the w/o mask setting) on a single NVIDIA RTX2080Ti GPU."
Software Dependencies | No | The paper does not name specific software with version numbers (e.g., Python, PyTorch, or CUDA versions) used for the experiments.
Experiment Setup | Yes | "We sample 512 rays per batch and train our model for 300k iterations for 14 hours (for the w/ mask setting) and 16 hours (for the w/o mask setting) on a single NVIDIA RTX2080Ti GPU."
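The experiment-setup quote above pins down only three reproducibility-relevant numbers: rays per batch, total iterations, and wall-clock time per setting. A minimal sketch of how a replicator might record these as a configuration (the dict layout and key names are illustrative, not from the paper; only the values are quoted from it):

```python
# Training configuration as reported in the NeuS paper.
# Only the numeric values are stated in the text; the structure
# and key names here are illustrative assumptions.
train_config = {
    "rays_per_batch": 512,
    "iterations": 300_000,
    "gpu": "NVIDIA RTX2080Ti",
    "wall_clock_hours": {"with_mask": 14, "without_mask": 16},
}


def implied_iters_per_second(config: dict, setting: str) -> float:
    """Back-of-envelope throughput implied by the reported runtimes."""
    hours = config["wall_clock_hours"][setting]
    return config["iterations"] / (hours * 3600)


# Roughly 6.0 it/s with masks, 5.2 it/s without.
print(round(implied_iters_per_second(train_config, "with_mask"), 1))
print(round(implied_iters_per_second(train_config, "without_mask"), 1))
```

Such a derived throughput figure is a useful sanity check when re-running the method on different hardware: a large deviation suggests a configuration mismatch rather than a hardware difference alone.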