Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction
Authors: Qiancheng Fu, Qingshan Xu, Yew Soon Ong, Wenbing Tao
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions, thus outperforming the state-of-the-arts by a large margin. |
| Researcher Affiliation | Academia | 1Huazhong University of Science and Technology 2Nanyang Technological University 3A*STAR, Singapore 1{fqc98,wenbingtao}@hust.edu.cn 2{qingshan.xu,asysong}@ntu.edu.sg |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | We plan to release the code completely, but it has not yet been released with the submission. |
| Open Datasets | Yes | Following previous practices [43, 35, 42], we reconstruct surfaces from 15 scans of DTU dataset [1] to evaluate our method. ... We also test on 7 challenging scenes from the low-res set of the Blended MVS dataset [41] (CC-4 License). |
| Dataset Splits | No | No explicit mention of training, validation, or test dataset splits (e.g., percentages or sample counts) for the DTU or Blended MVS datasets. The paper mentions training and testing on these datasets but not the specific splits used. |
| Hardware Specification | Yes | We train our model for 300k iterations for around 16 hours on a single NVIDIA RTX2080Ti GPU. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers, such as deep learning frameworks or libraries. |
| Experiment Setup | Yes | Similar to [43, 35, 42], the SDF network is modeled by an 8-layer MLP with 256 hidden units and a skip connection in the middle. ... The radiance network is parameterized by a 4-layer MLP with 256 hidden units. ... We sample 512 rays per batch ... We train our model for 300k iterations ... In our experiments, we choose α, β and γ as 0.1, 1.0 and 0.5 respectively. |
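The experiment-setup row describes the network architecture concretely enough to sketch. Below is a minimal NumPy forward pass of the SDF network as described: an 8-layer MLP with 256 hidden units and a skip connection in the middle, fed a batch of 512 sampled points (matching the 512 rays per batch). The activation (Softplus), the skip position (before layer 4), and the raw 3D input without positional encoding are assumptions not stated in the quoted text; the paper defers the exact design to [43, 35, 42].

```python
import numpy as np

HIDDEN = 256
IN_DIM = 3          # assumed: raw 3D point, no positional encoding shown
N_LAYERS = 8
SKIP_AT = 4         # assumed: "in the middle" = before layer 4

def softplus(x):
    # Numerically stable softplus activation (an assumption; common in SDF MLPs).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# Input width per layer; the skip layer re-concatenates the 3D input.
dims_in = []
for i in range(N_LAYERS):
    if i == 0:
        dims_in.append(IN_DIM)
    elif i == SKIP_AT:
        dims_in.append(HIDDEN + IN_DIM)
    else:
        dims_in.append(HIDDEN)
dims_out = [HIDDEN] * (N_LAYERS - 1) + [1]  # final layer emits the SDF value

weights = [rng.normal(0, 0.02, (di, do)) for di, do in zip(dims_in, dims_out)]
biases = [np.zeros(do) for do in dims_out]

def sdf_forward(points):
    """points: (batch, 3) -> (batch, 1) signed distance values."""
    h = points
    for i, (W, b) in enumerate(zip(weights, biases)):
        if i == SKIP_AT:
            h = np.concatenate([h, points], axis=-1)  # skip connection
        h = h @ W + b
        if i < N_LAYERS - 1:
            h = softplus(h)
    return h

out = sdf_forward(rng.normal(size=(512, IN_DIM)))  # 512 rays per batch
print(out.shape)  # (512, 1)
```

The 4-layer radiance network would follow the same pattern with 256 hidden units and no skip connection; training for 300k iterations with the loss weights α=0.1, β=1.0, γ=0.5 is as quoted above.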