GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction
Authors: Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, Bo Dai
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method boosts the SDF optimization process to reconstruct more detailed geometry, while reducing floaters and blurry edge artifacts in rendering by aligning Gaussian primitives with the underlying geometry. Extensive experiments demonstrate that our dual-branch design allows: 1) The GS-branch to generate structured primitives closely aligned with the surface, reducing floaters and improving detail and edge quality in view synthesis. 2) Accelerated convergence in the SDF-branch, resulting in superior geometric accuracy and enhanced surface details. |
| Researcher Affiliation | Collaboration | Mulin Yu (1), Tao Lu (1), Linning Xu (2), Lihan Jiang (4,1), Yuanbo Xiangli (3), Bo Dai (5,1); (1) Shanghai Artificial Intelligence Laboratory, (2) The Chinese University of Hong Kong, (3) Cornell University, (4) University of Science and Technology of China, (5) The University of Hong Kong |
| Pseudocode | No | The paper describes the methods in text and figures but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | We will release the code after acceptance. |
| Open Datasets | Yes | Datasets. We evaluated results using 26 real-world and synthetic scenes from various datasets: 7 from Mip-NeRF360 [3], 2 from Deep Blending [12], 2 from Tanks&Temples [18], and 15 from DTU [14], featuring a wide range of indoor, outdoor, and object-centric scenarios. |
| Dataset Splits | No | For all datasets, we used 1/8 of the images as test sets and the other 7/8 as training sets. |
| Hardware Specification | Yes | All experiments were conducted on a single NVIDIA A100 GPU with 80G memory. |
| Software Dependencies | No | The paper mentions implementing the model based on Scaffold-GS [21] and NeuS [31] with hash-encoding [11], but it does not specify explicit version numbers for software dependencies like Python, PyTorch, or CUDA libraries. |
| Experiment Setup | Yes | Implementation Details. We implemented our dual-branch model based on 1) Scaffold-GS [21] and 2) an enhanced version of NeuS [31] with a hash-grid variant [11], following the practice of [19]. The hash grid resolution spans from 2^5 to 2^11 with 16 levels, each entry having a feature dimension of 4 and a maximum of 2^21 entries per level. The coarsest 4 layers were activated initially for the DTU [1], and 8 layers for other datasets, with finer levels added every 2k iterations. We trained the GS-branch for 15k iterations, followed by joint training of both branches for 30k iterations. The SDF-branch was warmed up for 2k iterations on the DTU and 5k on other datasets without depth-guided ray sampling. A.1.1 Configurations. The variance σ^2 for the Gaussian function in Eq. 3 and 4 is set to 0.005. For the rendering loss discussed in Sec. 3.3, we set λ1 = 0.2 and λvol = 0.01... For the SDF-branch, we set λeik = 0.1 and implement an adaptive scheme for λcurv... For the mutual geometry loss, we typically assign λd = 0.5 and λn = 0.01. (These values are collected in the configuration sketch after the table.) |
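For readability, the sketch below collects the hyperparameters quoted in the Experiment Setup row into plain Python dictionaries. The grouping and field names (`HASH_GRID`, `SCHEDULE`, `LOSS_WEIGHTS`) are hypothetical conveniences and do not reflect the authors' code structure; the adaptive schedule for λcurv mentioned in the excerpt is omitted because its exact form is not quoted.

```python
# Hypothetical configuration sketch assembled from the values quoted above.
# Field names are illustrative only; the authors' (unreleased) code may
# organize these settings differently.

HASH_GRID = {
    "num_levels": 16,            # 16 resolution levels
    "base_resolution": 2 ** 5,   # coarsest level: 2^5
    "max_resolution": 2 ** 11,   # finest level: 2^11
    "features_per_level": 4,     # feature dimension per hash entry
    "log2_hashmap_size": 21,     # at most 2^21 entries per level
}

SCHEDULE = {
    "gs_pretrain_iters": 15_000,      # GS-branch trained alone first
    "joint_train_iters": 30_000,      # then both branches jointly
    "sdf_warmup_iters_dtu": 2_000,    # SDF-branch warm-up on DTU
    "sdf_warmup_iters_other": 5_000,  # SDF-branch warm-up on other datasets
    "coarse_levels_dtu": 4,           # hash levels active at start (DTU)
    "coarse_levels_other": 8,         # hash levels active at start (others)
    "level_growth_every": 2_000,      # add finer levels every 2k iterations
}

LOSS_WEIGHTS = {
    "sigma2": 0.005,     # variance of the Gaussian in Eq. 3 and 4
    "lambda_1": 0.2,     # rendering loss weight
    "lambda_vol": 0.01,  # volume rendering loss weight
    "lambda_eik": 0.1,   # Eikonal regularizer on the SDF-branch
    "lambda_d": 0.5,     # mutual depth-consistency loss weight
    "lambda_n": 0.01,    # mutual normal-consistency loss weight
}
```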