Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
gsplat: An Open-Source Library for Gaussian Splatting
Authors: Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that gsplat achieves up to 10% less training time and 4× less memory than the original Kerbl et al. (2023) implementation. We compare the training performance and efficiency of gsplat training against the original implementation by Kerbl et al. on the Mip-NeRF 360 dataset (Barron et al., 2022). ... We report average results on novel-view synthesis, memory usage, and training time using an A100 GPU (PyTorch v2.1.2 and cudatoolkit v11.8) at 7k and 30k training iterations in Table 1. |
| Researcher Affiliation | Collaboration | 1 UC Berkeley 2 Aalto University 3 ShanghaiTech University 4 Spectacular AI 5 Amazon 6 Luma AI |
| Pseudocode | Yes | Figure 1: Implementation of the main 3D Gaussian rendering process using the gsplat (v1.3.0) library with only 13 lines of code. A single Gaussian is initialized (left codeblock) and rendered as an RGB image (right). Figure 2: Code-block for training a Gaussian model with a chosen densification strategy. |
| Open Source Code | Yes | gsplat is an open-source library designed for training and developing Gaussian Splatting methods. ... Source code is available at https://github.com/nerfstudio-project/gsplat under Apache License 2.0. |
| Open Datasets | Yes | We compare the training performance and efficiency of gsplat training against the original implementation by Kerbl et al. on the Mip-NeRF 360 dataset (Barron et al., 2022). |
| Dataset Splits | Yes | We compare the training performance and efficiency of gsplat training against the original implementation by Kerbl et al. on the Mip-NeRF 360 dataset (Barron et al., 2022). ... Results are averaged over 7 scenes. |
| Hardware Specification | Yes | We report average results on novel-view synthesis, memory usage, and training time using an A100 GPU (PyTorch v2.1.2 and cudatoolkit v11.8) at 7k and 30k training iterations in Table 1. |
| Software Dependencies | Yes | using an A100 GPU (PyTorch v2.1.2 and cudatoolkit v11.8) |
| Experiment Setup | Yes | We use the standard ADC densification strategy and equivalent configuration settings for both. ... at 7k and 30k training iterations in Table 1. ... where s is set as a hyper-parameter during training, default is 0.3, to ensure that a 2D Gaussian's size spans the width of a single pixel. ... If the accumulated positional gradients for a primitive exceed a user-set threshold T (default is 0.0002), a Gaussian is either split or cloned. ... Gaussians with opacity values below a threshold (set at 0.005) are removed at fixed intervals during training. |
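The Experiment Setup row quotes two concrete ADC thresholds: a Gaussian is split or cloned when its accumulated positional gradient exceeds T (default 0.0002), and pruned when its opacity falls below 0.005. The per-Gaussian decision rule can be sketched in plain Python as below; this is an illustrative sketch of the rule as quoted, not gsplat's actual implementation, and the function name and the `keep`/`prune` labels are our own (the real strategy additionally uses Gaussian size to choose between splitting and cloning, which is omitted here).

```python
# Illustrative sketch of the ADC densification decision described in the
# quoted setup. Thresholds are the paper's stated defaults; everything
# else (names, return labels) is hypothetical.

GRAD_THRESHOLD = 0.0002     # default T for accumulated positional gradients
OPACITY_THRESHOLD = 0.005   # opacity below this is pruned at fixed intervals

def densify_decision(accum_grad: float, opacity: float) -> str:
    """Return the action taken for a single Gaussian primitive."""
    if opacity < OPACITY_THRESHOLD:
        return "prune"
    if accum_grad > GRAD_THRESHOLD:
        # The real strategy splits large Gaussians and clones small ones;
        # that size check is not modeled in this sketch.
        return "split_or_clone"
    return "keep"
```

Applied per primitive at fixed intervals during training, this keeps the total Gaussian count adaptive: under-reconstructed regions (high gradients) gain primitives while near-transparent ones are removed.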