Normal-GS: 3D Gaussian Splatting with Normal-Involved Rendering

Authors: Meng Wei, Qianyi Wu, Jianmin Zheng, Hamid Rezatofighi, Jianfei Cai

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that Normal-GS achieves near state-of-the-art visual quality while obtaining accurate surface normals and preserving real-time rendering performance.
Researcher Affiliation | Academia | 1Monash University, 2Nanyang Technological University. {meng.wei,qianyi.wu,hamid.rezatofighi,jianfei.cai}@monash.edu, {ASJMZheng}@ntu.edu.sg
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | We will release our code after publication.
Open Datasets | Yes | We followed the original 3DGS [2] methodology and used the NeRF Synthetic [1], Mip-NeRF 360 [26], Tanks and Temples [54], and Deep Blending [55] datasets to demonstrate the performance of our method.
Dataset Splits | No | The paper mentions training and testing splits ('we selected every 8th image for testing and used the remaining images for training'; see the split sketch after this table) but does not explicitly describe a separate validation split.
Hardware Specification | Yes | We tested our method and the baseline methods using their original released implementations with default hyperparameters on an NVIDIA RTX 3090 GPU with 24 GB of memory.
Software Dependencies | No | We implemented our method in Python using the PyTorch framework [56]. Specific version numbers for Python, PyTorch, or other libraries are not provided.
Experiment Setup | Yes | We trained our models for 30k iterations, following the settings of baseline methods. Consistent with [2, 6], we set λ_vol = 0.001. For the depth-normal loss, we used λ_N = 0.01. Because the depth and normals were inaccurate at the start of training, we added the depth-normal loss after training for 5k iterations. ... Our loss is defined as L = L_P + λ_vol·L_vol + λ_N·L_N, with λ_N = 0.01 and λ_vol = 0.001. The photometric loss, as defined in [2], is L_P = (1 − λ_SSIM)·L_1 + λ_SSIM·L_D-SSIM, with λ_SSIM = 0.2.
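
For the Dataset Splits row above, the quoted convention (every 8th image held out for testing, the rest used for training, as in the original 3DGS protocol) can be illustrated with a minimal sketch. The function name and the assumption that `images` is a list of camera/image records sorted by filename are ours, not the paper's:

```python
# Minimal sketch of the quoted split convention: every 8th image is held
# out for testing and the remaining images are used for training.
# Assumption: `images` is a list of image records sorted by filename.

def train_test_split(images, test_every=8):
    """Hold out every `test_every`-th image for testing (3DGS-style)."""
    test_set = [img for i, img in enumerate(images) if i % test_every == 0]
    train_set = [img for i, img in enumerate(images) if i % test_every != 0]
    return train_set, test_set
```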
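
The Experiment Setup row fully specifies the loss weights, so the quoted objective can be sketched in PyTorch. This is a hedged illustration only: the internals of the D-SSIM, volume, and depth-normal terms are not given in the quotes, so `d_ssim`, `l_vol`, and `l_normal` are assumed to be precomputed scalar tensors, and all names here are illustrative rather than the authors' code:

```python
import torch
import torch.nn.functional as F

# Sketch of the quoted objective L = L_P + λ_vol·L_vol + λ_N·L_N.
# `d_ssim`, `l_vol`, and `l_normal` stand in for the paper's D-SSIM,
# volume, and depth-normal terms, whose internals are not quoted above.

LAMBDA_SSIM = 0.2    # photometric mix weight, as defined in 3DGS [2]
LAMBDA_VOL = 0.001   # volume regularization weight (λ_vol)
LAMBDA_N = 0.01      # depth-normal loss weight (λ_N)
WARMUP_ITERS = 5000  # depth-normal loss is added only after 5k iterations

def total_loss(render, gt, d_ssim, l_vol, l_normal, iteration):
    # Photometric loss: L_P = (1 - λ_SSIM)·L_1 + λ_SSIM·L_D-SSIM
    l1 = F.l1_loss(render, gt)
    l_p = (1.0 - LAMBDA_SSIM) * l1 + LAMBDA_SSIM * d_ssim
    # Base objective: L_P + λ_vol·L_vol
    loss = l_p + LAMBDA_VOL * l_vol
    # Depth and normals are unreliable early on, so L_N enters late.
    if iteration > WARMUP_ITERS:
        loss = loss + LAMBDA_N * l_normal
    return loss
```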