Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting
Authors: Ziyi Yang, Xinyu Gao, Yang-Tian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality. In this section, we present both quantitative and qualitative results of our method. |
| Researcher Affiliation | Collaboration | 1State Key Lab of CAD&CG, Zhejiang University 2The University of Hong Kong 3ByteDance Inc. |
| Pseudocode | No | The paper describes its methods but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes and datasets are available at https://ingra14m.github.io/Spec-Gaussian-website. |
| Open Datasets | Yes | We used the NeRF, NSVF, and our "Anisotropic Synthetic" datasets as the experimental datasets for synthetic scenes. Our codes and datasets are available at https://ingra14m.github.io/Spec-Gaussian-website. |
| Dataset Splits | Yes | We used the NeRF, NSVF, and our "Anisotropic Synthetic" datasets as the experimental datasets for synthetic scenes. We used the Mip360 [2] dataset, which contains indoor scenes with specular highlights. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA RTX 3090. |
| Software Dependencies | No | The paper states "We implemented our framework using PyTorch [42]" but does not provide a specific version number for PyTorch or other key software dependencies. |
| Experiment Setup | Yes | For the ASG appearance field, the decoupling MLP Ψ consists of 3 layers, each with 64 hidden units, and the positional encoding for the view direction is of order 2. Regarding coarse-to-fine training, which is applied only to real-world scenes to remove floaters, we start with a resolution r_s that is 4× downsampled. [...] We optimize the learnable parameters and MLPs using the same loss function as 3D-GS [23]. The total supervision is given by L = (1 − λ_D-SSIM) L_1 + λ_D-SSIM L_D-SSIM, where λ_D-SSIM = 0.2 is used consistently in the experiments. |
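The quoted loss is the standard 3D-GS supervision: a weighted blend of an L1 photometric term and a D-SSIM term with λ_D-SSIM = 0.2. A minimal sketch in plain Python (not the authors' code; the `l1` and `d_ssim` input values are hypothetical placeholders standing in for per-image loss terms):

```python
def total_loss(l1: float, d_ssim: float, lambda_dssim: float = 0.2) -> float:
    """Combine L1 and D-SSIM terms as in the 3D-GS-style supervision:
    L = (1 - lambda_D-SSIM) * L_1 + lambda_D-SSIM * L_D-SSIM.
    """
    return (1.0 - lambda_dssim) * l1 + lambda_dssim * d_ssim

# Hypothetical per-image loss values, purely for illustration:
loss = total_loss(l1=0.05, d_ssim=0.10)  # 0.8 * 0.05 + 0.2 * 0.10 ≈ 0.06
```

In practice both terms would be computed with a framework such as PyTorch over rendered and ground-truth images; only the 0.2 weighting is taken from the paper's stated setup.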