Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting
Authors: Ruiqi Li, Yiu-ming Cheung
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on various benchmark datasets show that the proposed method provides well-calibrated uncertainty estimates and better rendering quality than previous methods that enable uncertainty quantification with view synthesis. In addition, by leveraging the model parameter uncertainty estimated by our method, we can remove noisy Gaussians automatically, thereby obtaining a high-fidelity part of the reconstructed scene, which greatly improves visual quality. |
| Researcher Affiliation | Academia | Ruiqi Li, Yiu-ming Cheung; Department of Computer Science, Hong Kong Baptist University; {csrqli, ymc}@comp.hkbu.edu.hk |
| Pseudocode | Yes | Algorithm 1 The pseudo-code of the training process of our uncertainty-aware 3DGS. |
| Open Source Code | Yes | Code is available at https://github.com/csrqli/variational-3dgs. |
| Open Datasets | Yes | We use three datasets for evaluation: i) the LF dataset [49] contains a total of 8 indoor and outdoor scenes, each with over 100 images captured in 360° views. Following the same setting as CF-NeRF [5], we use images from the selected scenes torch, basket, africa, and statue for evaluation. ii) The LLFF dataset [34] contains 8 forward-facing and outdoor scenes, each with 20 to 62 images whose camera positions are arranged in a grid pattern. iii) The Mip-NeRF 360 dataset [50] contains 6 outdoor scenes; in each scene, more than 200 images are captured in 360° views. |
| Dataset Splits | Yes | We train on all 8 scenes in the LLFF dataset and report the average results in Table 5. |
| Hardware Specification | Yes | The experiments are performed on a single NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions using an AdamW optimizer, but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | We use an AdamW optimizer to update the learnable parameters of our variational multi-scale representation. The learning rate for 3DGS attributes is the same as in the original algorithm [2]. We choose to spawn K = 10 finer Gaussians in the offset table to perform our multi-scale variational inference. The learning rate of the offset table is 0.1 times the learning rate of each attribute (see the hedged sketch after this table). |
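
The experiment-setup row above fully determines the optimizer wiring: AdamW, the original per-attribute 3DGS learning rates, K = 10 finer Gaussians per coarse Gaussian, and an offset-table learning rate of 0.1x the corresponding attribute's rate. Below is a minimal PyTorch sketch of that configuration, not the authors' released code; the tensor names, shapes, and the base learning-rate values are assumptions (the base rates follow common 3DGS defaults from [2]).

```python
# Hypothetical sketch of the optimizer setup described in the paper's
# Experiment Setup. All names, shapes, and base LR values are assumed,
# not taken from the authors' repository.
import torch

N = 1000  # assumed number of coarse Gaussians, for illustration only

# Placeholder learnable tensors standing in for 3DGS attributes.
gaussian_attrs = {
    "xyz":      torch.zeros(N, 3, requires_grad=True),
    "opacity":  torch.zeros(N, 1, requires_grad=True),
    "scaling":  torch.zeros(N, 3, requires_grad=True),
    "rotation": torch.zeros(N, 4, requires_grad=True),
}
# Per-attribute base learning rates (assumed 3DGS-style defaults [2]).
base_lrs = {"xyz": 1.6e-4, "opacity": 0.05, "scaling": 5e-3, "rotation": 1e-3}

K = 10  # finer Gaussians spawned per coarse Gaussian (from the paper)
# Offset table: K finer-Gaussian offsets per attribute (shapes assumed).
offset_table = {
    name: torch.zeros(N, K, p.shape[-1], requires_grad=True)
    for name, p in gaussian_attrs.items()
}

param_groups = []
for name, p in gaussian_attrs.items():
    param_groups.append({"params": [p], "lr": base_lrs[name]})
    # Offset-table LR is 0.1x the matching attribute LR, per the paper.
    param_groups.append({"params": [offset_table[name]],
                         "lr": 0.1 * base_lrs[name]})

optimizer = torch.optim.AdamW(param_groups)
```

In a training loop this optimizer would be stepped as usual (`loss.backward(); optimizer.step(); optimizer.zero_grad()`); the only paper-specific choices encoded here are AdamW, K = 10, and the 0.1x offset-table learning-rate scaling.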