DisC-GS: Discontinuity-aware Gaussian Splatting

Authors: Haoxuan Qu, Zhuoling Li, Hossein Rahmani, Yujun Cai, Jun Liu

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (5 Experiments) | Datasets. To evaluate the efficacy of our proposed framework DisC-GS, following previous Gaussian Splatting works [30, 49], we evaluate our framework on a total of 13 3D scenes, which include both outdoor scenes and indoor scenes. Specifically, among these 13 scenes, 9 of them are from the Mip-NeRF360 dataset [4], 2 of them are from the Tanks&Temples dataset [31], and 2 of them are from the Deep Blending dataset [23]. We also follow previous works [30, 49] in their train-test split. Evaluation metrics. Following [30, 49], we use the following three metrics for evaluation: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) [51]. Implementation details. We conduct our experiments on an RTX 3090 GPU and develop our code mainly based on the GitHub repository [2] provided by Kerbl et al. [30]. Moreover, we also get inspired by [35, 52, 46] during our code implementation, and make use of the LPIPS loss during our training process. Furthermore, for the newly introduced attribute c_curve ∈ R^(4M×2), we set its initial learning rate to 2e-4, and set the hyperparameter M to 3. Besides, in the densification procedure of our framework, when a Gaussian is cloned/split into two new Gaussians, we assign both the new Gaussians with the same attribute c_curve as the original one. (Hedged sketches of the metric computation and of the LPIPS-augmented training loss follow this table.)
Researcher Affiliation | Academia | Haoxuan Qu (Lancaster University, U.K., h.qu5@lancaster.ac.uk); Zhuoling Li* (Lancaster University, U.K., z.li81@lancaster.ac.uk); Hossein Rahmani (Lancaster University, U.K., h.rahmani@lancaster.ac.uk); Yujun Cai (University of Queensland, Australia, vanora.caiyj@gmail.com); Jun Liu (Lancaster University, U.K., j.liu81@lancaster.ac.uk)
Pseudocode | No | The paper describes its methods through textual descriptions and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | At this submission stage, we are sorry that we do not get enough approval to open-source our code.
Open Datasets | Yes | To evaluate the efficacy of our proposed framework DisC-GS, following previous Gaussian Splatting works [30, 49], we evaluate our framework on a total of 13 3D scenes, which include both outdoor scenes and indoor scenes. Specifically, among these 13 scenes, 9 of them are from the Mip-NeRF360 dataset [4], 2 of them are from the Tanks&Temples dataset [31], and 2 of them are from the Deep Blending dataset [23].
Dataset Splits | No | We also follow previous works [30, 49] in their train-test split. (The assumed scene list and split convention are sketched after this table.)
Hardware Specification | Yes | We conduct our experiments on an RTX 3090 GPU and develop our code mainly based on the GitHub repository [2] provided by Kerbl et al. [30].
Software Dependencies | No | The paper mentions developing code based on a GitHub repository [2] provided by Kerbl et al. [30] but does not specify exact software versions for dependencies like Python, PyTorch, or CUDA.
Experiment Setup | Yes | Furthermore, for the newly introduced attribute c_curve ∈ R^(4M×2), we set its initial learning rate to 2e-4, and set the hyperparameter M to 3. (A hedged sketch of this optimizer and densification setup follows the table.)
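
Metric computation (cf. the Research Type row): the paper reports PSNR, SSIM, and LPIPS [51] following [30, 49], but since the code is not released the exact evaluation script is unknown. Below is a minimal sketch of how these three metrics are commonly computed in Gaussian Splatting evaluations, assuming rendered and ground-truth images as PyTorch tensors of shape (1, 3, H, W) in [0, 1]; the pytorch_msssim and lpips packages and the function names are illustrative choices, not the authors' code.

```python
import torch
import lpips                      # pip install lpips
from pytorch_msssim import ssim   # pip install pytorch-msssim

# LPIPS [51] with a VGG backbone; AlexNet ('alex') is the other common choice.
lpips_fn = lpips.LPIPS(net='vgg')

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Peak Signal-to-Noise Ratio for images in [0, 1], shape (1, 3, H, W)."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(1.0 / mse)

@torch.no_grad()
def evaluate_pair(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """Compute the three reported metrics for one rendered/ground-truth pair."""
    return {
        "PSNR": psnr(pred, gt).item(),
        "SSIM": ssim(pred, gt, data_range=1.0).item(),
        # normalize=True maps [0, 1] inputs to the [-1, 1] range LPIPS expects.
        "LPIPS": lpips_fn(pred, gt, normalize=True).item(),
    }
```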
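Training loss (cf. the Research Type row): the implementation details state only that an LPIPS loss is used during training, on top of a pipeline built from the 3DGS codebase [2, 30]. The sketch below assumes the standard 3DGS photometric objective (L1 plus D-SSIM with weight 0.2) with an added LPIPS term; the LPIPS weight is a placeholder, since the paper does not report it.

```python
import torch
import lpips
from pytorch_msssim import ssim

lpips_fn = lpips.LPIPS(net='vgg')

def training_loss(render: torch.Tensor, gt: torch.Tensor,
                  lambda_dssim: float = 0.2, lambda_lpips: float = 0.1) -> torch.Tensor:
    """Sketch of a 3DGS-style loss with an extra LPIPS term.
    lambda_dssim = 0.2 is the 3DGS default; lambda_lpips is a placeholder,
    since the paper does not report the weight it uses."""
    l1 = torch.abs(render - gt).mean()
    d_ssim = 1.0 - ssim(render, gt, data_range=1.0)
    perceptual = lpips_fn(render, gt, normalize=True).mean()
    return (1.0 - lambda_dssim) * l1 + lambda_dssim * d_ssim + lambda_lpips * perceptual
```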
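Scenes and split (cf. the Open Datasets and Dataset Splits rows): the paper defers both the 13-scene selection and the train/test split to [30, 49]. The sketch below assumes the standard 3DGS evaluation protocol: the nine Mip-NeRF360 scenes, two Tanks&Temples scenes, two Deep Blending scenes, and a hold-out of every 8th image as a test view. The scene names and the every-8th-image convention come from the public 3DGS setup, not from this paper, so treat them as an assumption.

```python
# Scene lists assumed from the standard 3DGS [30] evaluation protocol.
SCENES = {
    "mipnerf360": ["bicycle", "bonsai", "counter", "flowers", "garden",
                   "kitchen", "room", "stump", "treehill"],   # 9 scenes
    "tanksandtemples": ["train", "truck"],                    # 2 scenes
    "deepblending": ["drjohnson", "playroom"],                # 2 scenes
}

def train_test_split(image_names, llffhold=8):
    """Every llffhold-th image is held out as a test view (3DGS convention)."""
    image_names = sorted(image_names)
    test = [name for i, name in enumerate(image_names) if i % llffhold == 0]
    train = [name for i, name in enumerate(image_names) if i % llffhold != 0]
    return train, test
```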
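Curve attribute setup (cf. the Experiment Setup row): the only reported hyperparameters for the new attribute c_curve are its initial learning rate (2e-4) and M = 3, together with the rule that a cloned/split Gaussian passes its c_curve unchanged to both children. The sketch below shows one way these details could be wired into a 3DGS-style Adam optimizer and densification step; the tensor layout (M curves with four 2-D control points each), the per-attribute parameter group, and every name below are assumptions rather than the authors' implementation.

```python
import torch

M = 3                # number of curves attached to each Gaussian (paper: M = 3)
CURVE_LR = 2e-4      # initial learning rate reported for the curve attribute

# Assumed layout: M curves, each with 4 control points in 2D -> shape (N, 4 * M, 2).
num_gaussians = 100_000  # illustrative size
c_curve = torch.nn.Parameter(torch.zeros(num_gaussians, 4 * M, 2))

# The 3DGS codebase gives each attribute its own Adam parameter group; the new
# curve attribute would be added the same way, with its own learning rate.
optimizer = torch.optim.Adam(
    [{"params": [c_curve], "lr": CURVE_LR, "name": "c_curve"}],
    eps=1e-15,
)

def curve_attr_for_children(c_curve: torch.Tensor, selected: torch.Tensor) -> torch.Tensor:
    """Densification rule stated in the paper: when a selected Gaussian is cloned
    or split into two new Gaussians, both children receive the parent's c_curve."""
    parent = c_curve[selected]          # (K, 4 * M, 2)
    return parent.repeat(2, 1, 1)       # two children, each an unchanged copy
```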