From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS

Authors: Haoran Zhang, Junkai Deng, Xuhui Chen, Fei Hou, Wencheng Wang, Hong Qin, Chen Qian, Ying He

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness. Our data and code are publicly available at https://github.com/728388808/alpha-NeuS. ... 4 Experiments
Researcher Affiliation | Collaboration | 1 Key Laboratory of System Software (CAS) and State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences; 3 College of Computing and Data Science, Nanyang Technological University; 4 Department of Computer Science, Stony Brook University; 5 SenseTime Research
Pseudocode | No | The paper describes its method in text and equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Our data and code are publicly available at https://github.com/728388808/alpha-NeuS.
Open Datasets | Yes | Our data and code are publicly available at https://github.com/728388808/alpha-NeuS. ... We thank the vibrant Blender community for heterogeneous models that drive our experiments. Specifically, we thank Rina Bothma for creating the bottle model, Joachim Bornemann for creating the case model, Rudolf Wohland for creating the jar model, and Eleanie for creating the strawberry model which is put inside the jar. These models are released under a royalty-free license described at https://www.blenderkit.com/docs/licenses/.
Dataset Splits | No | The synthetic dataset comprises 100 training images from different viewpoints. However, the paper does not specify how these images are split into training, validation, and test sets, nor does it provide details for the real-world scenes.
Hardware Specification | Yes | The training process of NeuS typically takes about 9.75 hours and DCUDF convergence only requires a few minutes on a single NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions using components from NeuS and DCUDF and the VectorAdam optimizer, but it does not specify version numbers for any software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | Implementation details. Our training structure is the same as NeuS. We also followed the recommended configuration for the synthetic dataset by the authors of NeuS, without changing the loss functions or their respective weights. That is, we chose $\lambda_1 = 1.0$ for the color loss and $\lambda_2 = 0.1$ for the Eikonal loss. ... To extract the unbiased surface through DCUDF [11], we choose to use 0.005 as the threshold for synthetic scenes, and 0.002 or 0.005 for real-world scenes. ... We performed 300 epochs for step 1 and 100 epochs for step 2, respectively, which is the default setting of DCUDF. ... We set the weights $\lambda_1 = 500$ and $\lambda_2 = 0.5$, which differ from the DCUDF default settings.
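
The Experiment Setup row quotes concrete hyperparameters. As a minimal sketch of how they fit together, the snippet below assembles a NeuS-style objective (L1 color term plus Eikonal regularizer) with the quoted weights, and collects the quoted DCUDF extraction settings in a config dict. The weights, thresholds, and epoch counts come from the quotes above; every name in the snippet (`total_loss`, `DCUDF_CONFIG`, and so on) is an illustrative assumption, not an identifier from the alpha-NeuS or DCUDF code bases.

```python
import torch
import torch.nn.functional as F

# Loss weights quoted from the paper (NeuS defaults, unchanged by alpha-NeuS).
LAMBDA_COLOR = 1.0    # lambda_1: color loss weight
LAMBDA_EIKONAL = 0.1  # lambda_2: Eikonal loss weight

def total_loss(pred_rgb: torch.Tensor,
               gt_rgb: torch.Tensor,
               sdf_gradients: torch.Tensor) -> torch.Tensor:
    """NeuS-style objective: L1 color term plus Eikonal regularizer.

    sdf_gradients holds gradients of the implicit field at sampled points,
    shape (N, 3); the Eikonal term pushes their norms toward 1.
    """
    color_loss = F.l1_loss(pred_rgb, gt_rgb)
    eikonal_loss = ((sdf_gradients.norm(dim=-1) - 1.0) ** 2).mean()
    return LAMBDA_COLOR * color_loss + LAMBDA_EIKONAL * eikonal_loss

# DCUDF extraction settings quoted in the row above; key names are illustrative.
DCUDF_CONFIG = {
    "threshold_synthetic": 0.005,      # level-set threshold, synthetic scenes
    "threshold_real": (0.002, 0.005),  # real-world scenes use one of these two
    "epochs_step1": 300,               # DCUDF default
    "epochs_step2": 100,               # DCUDF default
    "lambda1": 500,                    # differs from the DCUDF defaults
    "lambda2": 0.5,
}
```

The sketch assumes NeuS's published L1 formulation of the color term over sampled rays; consult the released alpha-NeuS code for the exact loss implementation.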