3D Gaussian Rendering Can Be Sparser: Efficient Rendering via Learned Fragment Pruning

Authors: Zhifan Ye, Chenxi Wan, Chaojian Li, Jihoon Hong, Sixu Li, Leshu Li, Yongan Zhang, Yingyan (Celine) Lin

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments in both static and dynamic scenes validate the effectiveness of our approach.
Researcher Affiliation | Academia | Zhifan Ye, Chenxi Wan, Chaojian Li, Jihoon Hong, Sixu Li, Leshu Li, Yongan Zhang, Yingyan (Celine) Lin, Georgia Institute of Technology, {zye327, celine.lin}@gatech.edu
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found.
Open Source Code | Yes | Our code is available at https://github.com/GATECH-EIC/Fragment-Pruning.
Open Datasets | Yes | For static scenes, we adopt the five outdoor scenes and four indoor scenes from the Mip-NeRF 360 dataset [6], two scenes (Train and Truck) from the Tanks&Temples dataset [19], and two scenes (Dr Johnson and Playroom) from the Deep Blending dataset [22]. For dynamic scenes, we select the Plenoptic Video Dataset [39], which is composed of six real-world video sequences.
Dataset Splits | No | The paper uses standard datasets but does not explicitly provide training/validation/test splits (proportions or methodology) for all experiments. Table 1 mentions a 'test set' for measuring rendering time but does not define that split.
Hardware Specification | Yes | To validate the effectiveness of the proposed approach, we benchmark the rendering speed of our method and the baselines on a consumer hardware device, Nvidia's edge GPU, the Jetson Orin NX [17].
Software Dependencies | No | The paper mentions an 'OpenGL-accelerated Gaussian Splatting renderer [38]' and uses the Adam optimizer with L1 and SSIM losses, but does not specify version numbers for Python, OpenGL, or any other software dependency.
Experiment Setup | Yes | Specifically, we fine-tune each scene for 5,000 epochs, utilizing a batch size of 1. In particular, we adopt the Adam optimizer with a learning rate of 0.01, β1 = 0.9, and β2 = 0.99 during the fine-tuning process. We adopt the same L1 Loss and SSIM Loss as the pre-training process [16]. For dynamic scenes, we adjust our training batch size to 4, adhering to the default batch size as specified in the 4D Gaussian Splatting training [28].
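
For concreteness, the fine-tuning recipe quoted above maps onto a short PyTorch loop. The following is a minimal sketch, not the authors' released code: `render`, `gaussian_params`, and `train_views` are hypothetical placeholders, the 0.2 SSIM weight follows the common 3DGS loss convention rather than anything stated in this excerpt, and `simplified_ssim` uses a uniform window instead of the Gaussian window of the reference implementation.

```python
import torch
import torch.nn.functional as F

def simplified_ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Mean SSIM over (N, C, H, W) images in [0, 1], computed with a uniform
    # window (a simplification of the Gaussian-window SSIM used in 3DGS).
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def finetune_scene(gaussian_params, train_views, render, epochs=5000, lam=0.2):
    # Adam with lr = 0.01, beta1 = 0.9, beta2 = 0.99 and batch size 1, as quoted
    # from the paper; lam = 0.2 is the usual 3DGS loss weight (an assumption here).
    opt = torch.optim.Adam(gaussian_params, lr=0.01, betas=(0.9, 0.99))
    for _ in range(epochs):
        for camera, gt in train_views:  # batch size 1: one training view per step
            pred = render(camera, gaussian_params)  # (1, 3, H, W), values in [0, 1]
            loss = (1 - lam) * F.l1_loss(pred, gt) \
                + lam * (1.0 - simplified_ssim(pred, gt))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

For the dynamic scenes, the paper raises the batch size to 4 (following the 4D Gaussian Splatting default [28]); in this sketch that would correspond to rendering four views and averaging their losses before each optimizer step.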