3D Gaussian Splatting as Markov Chain Monte Carlo
Authors: Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Yang-Che Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on standard scenes evaluated in [19] (NeRF Synthetic [28], Mip-NeRF 360 [2], Tanks & Temples [22], Deep Blending [16]), as well as the OMMO [27] dataset that exhibits large scene context. ... We use various datasets, both synthetic and real. ... Metrics. We evaluate each method using three standard metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM) [38], and Learned Perceptual Image Patch Similarity (LPIPS) [50]. |
| Researcher Affiliation | Collaboration | Shakiba Kheradmand1, Daniel Rebain1, Gopal Sharma1, Weiwei Sun1, Yang-Che Tseng1, Hossam Isack2, Abhishek Kar2, Andrea Tagliasacchi3,4,5, Kwang Moo Yi1 (1University of British Columbia, 2Google Research, 3Google DeepMind, 4Simon Fraser University, 5University of Toronto) |
| Pseudocode | No | The paper describes methods through text and mathematical equations, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code is public at the project page. |
| Open Datasets | Yes | We use various datasets, both synthetic and real. Specifically, as in 3DGS [19], we use all scenes from the NeRF Synthetic [28] dataset, the same two scenes used in [19] of Tanks & Temples [22] and Deep Blending [16], and all publicly available scenes from Mip-NeRF 360 [2]. We further use all scenes from the OMMO [27] dataset as in [10]. ... NeRF Synthetic [28]: made available under the Creative Commons Attribution 3.0 License. Available at https://www.matthewtancik.com/nerf. Mip-NeRF 360 [2]: no license terms provided. Available at https://jonbarron.info/mipnerf360/. OMMO [27]: no license terms provided. Available at https://ommo.luchongshan.com/. Deep Blending [16]: no license terms provided. Available at http://visual.cs.ucl.ac.uk/pubs/deepblending/. Tanks & Temples [22]: made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, https://www.tanksandtemples.org/license/. |
| Dataset Splits | No | The paper mentions using datasets for training and testing, but does not specify explicit train/validation/test splits with percentages or absolute sample counts, nor cite a predefined validation split. |
| Hardware Specification | No | The paper mentions that 3DGS allows '1080p images to be rendered at 130 frames per second on modern GPUs', and compares 'single optimization iteration time' and 'runtime' but does not specify the exact GPU or CPU models, or any other detailed hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch [33]' as an implementation framework but does not specify a version number for PyTorch or any other software dependencies with version numbers. |
| Experiment Setup | Yes | For all experiments, unless specified otherwise, we use λ_noise = 5×10^5, λ_Σ = 0.01, and λ_o = 0.01. For Deep Blending [16], we use λ_o = 0.001. Following 3DGS [19], we start with 500 warmup iterations, during which we do not perform our relocalization in Sec. 3.4 nor increase the number of Gaussians. ... For the location of Gaussians, we start at a learning rate of 1.6e-4 and decay it exponentially to 1.6e-6. (See the configuration sketch after the table.) |
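The quoted setup translates directly into a small configuration block. The sketch below is a minimal illustration, assuming a 3DGS-style log-linear interpolation for the exponential learning-rate decay and a standard 30k-iteration schedule; neither detail appears in the quoted excerpt, and all identifiers (`lambda_noise`, `position_lr`, etc.) are hypothetical rather than taken from the authors' released code.

```python
import math

# Hedged sketch of the reported hyperparameters; names are illustrative.
config = {
    "lambda_noise": 5e5,         # noise term weight, reported as 5×10^5
    "lambda_sigma": 0.01,        # covariance regularizer weight (λ_Σ)
    "lambda_o": 0.01,            # opacity regularizer weight (0.001 for Deep Blending)
    "warmup_iters": 500,         # no relocalization or Gaussian growth during warmup
    "position_lr_init": 1.6e-4,  # initial learning rate for Gaussian locations
    "position_lr_final": 1.6e-6, # final learning rate after exponential decay
    "max_iters": 30_000,         # assumed; the standard 3DGS schedule length
}

def position_lr(step: int, cfg: dict = config) -> float:
    """Exponentially decay the location learning rate from init to final.

    Implemented as log-linear interpolation, a schedule commonly used in
    3DGS codebases; the paper only states 'decay it exponentially'.
    """
    t = min(max(step / cfg["max_iters"], 0.0), 1.0)
    log_lr = (1.0 - t) * math.log(cfg["position_lr_init"]) \
             + t * math.log(cfg["position_lr_final"])
    return math.exp(log_lr)

# Example: learning rate at the end of warmup and at the final iteration.
print(position_lr(config["warmup_iters"]))  # ≈ 1.48e-4
print(position_lr(config["max_iters"]))     # 1.6e-6 (up to float rounding)
```

Under this assumed schedule, the learning rate falls by a constant factor per iteration, so most of the decay (two orders of magnitude here) is spread evenly in log space across training rather than front-loaded.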