GaussianPro: 3D Gaussian Splatting with Progressive Propagation

Authors: Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, Xuejin Chen

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both large-scale and small-scale scenes validate the effectiveness of our method. Our method significantly surpasses 3DGS on the Waymo dataset, exhibiting an improvement of 1.15 dB in terms of PSNR.
Researcher Affiliation | Academia | 1MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China, 2The University of Hong Kong, 3Nanjing University, 4The University of Adelaide, 5ShanghaiTech University, 6Texas A&M University.
Pseudocode | No | The paper describes the progressive Gaussian propagation strategy and its steps (e.g., in Section 4.2 and Figure 2), but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Codes and data are available at https://github.com/kcheng1021/GaussianPro.
Open Datasets | Yes | We conduct our experiments in a large-scale urban dataset Waymo (Sun et al., 2020), and the common NeRF benchmark Mip-NeRF360 dataset (Caesar et al., 2020).
Dataset Splits | No | To evaluate the performance of novel view synthesis, following the common settings, we select one of every eight images as testing images and the remaining ones as training data. (A split sketch is given below the table.)
Hardware Specification | Yes | All experiments are conducted on an RTX 3090 GPU.
Software Dependencies | No | Our method is built upon the popular open-source 3DGS code base (Kerbl et al., 2023). For outdoor datasets like Waymo, we use Segformer (Xie et al., 2021) to segment the sky region. (A segmentation sketch is given below the table.)
Experiment Setup | Yes | In alignment with the approach described in 3DGS, our models are trained for 30,000 iterations across all scenes, following 3DGS's training schedule and hyperparameters. Besides the original clone and split Gaussian densification strategies used in 3DGS, we additionally perform our proposed progressive propagation strategy every 50 training iterations, where propagation is performed 3 times. The threshold σ of the absolute relative difference is set to 0.8. For the planar loss, we set β = 0.001 and γ = 100. (A configuration sketch is given below the table.)
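For the Dataset Splits row, the quoted rule (hold out one of every eight images for testing) can be illustrated with a minimal sketch. The directory layout, file extension, and the choice to start the hold-out at the first image are assumptions; the paper does not pin them down.

```python
# Hypothetical illustration of the "one of every eight images" hold-out rule.
# The starting index and file naming are assumptions, not the authors' code.
from pathlib import Path

def split_train_test(image_dir: str, test_every: int = 8):
    images = sorted(Path(image_dir).glob("*.png"))
    test = [p for i, p in enumerate(images) if i % test_every == 0]
    train = [p for i, p in enumerate(images) if i % test_every != 0]
    return train, test

train_views, test_views = split_train_test("data/waymo_scene/images")
print(f"{len(train_views)} training views, {len(test_views)} test views")
```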
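For the Software Dependencies row, sky masking with SegFormer could look like the sketch below, assuming a HuggingFace ADE20K checkpoint; the paper does not state which weights or framework were used, and the ADE20K "sky" class index is an assumption here.

```python
# Hedged sketch of sky segmentation with a SegFormer ADE20K checkpoint.
# The checkpoint name and the sky class index (2 in 0-indexed ADE20K) are
# assumptions; the paper only states that Segformer is used for sky masking.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

ckpt = "nvidia/segformer-b5-finetuned-ade-640-640"
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt).eval()

image = Image.open("frame_000000.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (1, num_classes, H/4, W/4)
labels = logits.argmax(dim=1)            # per-pixel class ids at 1/4 resolution
sky_mask = (labels == 2)                 # assumed ADE20K "sky" class
# Upsample the mask to the input resolution before masking out the sky region.
sky_mask_full = torch.nn.functional.interpolate(
    sky_mask.unsqueeze(1).float(), size=image.size[::-1], mode="nearest"
).bool()
```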
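For the Experiment Setup row, the quoted hyperparameters can be gathered into a single configuration sketch. The field names below are illustrative and do not correspond to actual options in the GaussianPro repository.

```python
from dataclasses import dataclass

@dataclass
class GaussianProTrainConfig:
    # Values quoted in the paper's experiment setup; the names are hypothetical.
    total_iterations: int = 30_000       # same schedule and hyperparameters as 3DGS
    propagation_interval: int = 50       # run progressive propagation every 50 iterations
    propagation_times: int = 3           # propagation is performed 3 times per trigger
    abs_rel_threshold: float = 0.8       # sigma: absolute relative difference threshold
    planar_loss_beta: float = 0.001      # beta weight of the planar loss
    planar_loss_gamma: float = 100.0     # gamma weight of the planar loss

cfg = GaussianProTrainConfig()
```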