Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting
Authors: Junha Hyung, Susung Hong, Sungwon Hwang, Jaeseong Lee, Jaegul Choo, Jin-Hwa Kim
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effective rank regularization, comparing its performance as an add-on to baseline models. Additionally, we analyze the contributions of different components of the method. ... Table 1 presents the quantitative results of geometry reconstruction on the DTU dataset. We report the Chamfer distance for each scene, along with the mean Chamfer |
| Researcher Affiliation | Collaboration | Junha Hyung (1), Susung Hong (4), Sungwon Hwang (1), Jaeseong Lee (1), Jaegul Choo (1), Jin-Hwa Kim (2,3) — 1: KAIST, 2: NAVER AI Lab, 3: SNU AIIS, 4: Korea University |
| Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm block. |
| Open Source Code | Yes | The project page is available at https://junhahyung.github.io/erankgs.github.io/. |
| Open Datasets | Yes | We evaluate our model on the DTU [14] and Mip-NeRF 360 [2] datasets. |
| Dataset Splits | No | For novel view synthesis, the images are split into training and test sets, while the entire set of images is used for geometry reconstruction. |
| Hardware Specification | Yes | All experiments are conducted on a Tesla V100 GPU. |
| Software Dependencies | No | The paper does not provide version numbers for the software dependencies used in the experiments; it mentions only the NSML platform, without naming specific libraries or their versions. |
| Experiment Setup | Yes | The regularization hyperparameter λ_erank = 0.01 is used for all training. |
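
The quoted setup (λ_erank = 0.01) refers to the paper's effective-rank regularizer on each Gaussian's covariance. The sketch below shows, under stated assumptions, how such a term could be computed in PyTorch from per-Gaussian scale parameters; the function names, the dummy `scales` tensor, and the exact penalty form are illustrative assumptions, not the authors' released implementation (see the project page linked above for that).

```python
# Minimal sketch, assuming per-Gaussian scales are an (N, 3) tensor of standard
# deviations along each Gaussian's principal axes. The penalty form below is an
# illustrative assumption, not the paper's exact loss.
import torch

def effective_rank(scales: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Covariance eigenvalues are the squared scales; effective rank is the
    # exponential of the Shannon entropy of the normalized eigenvalue spectrum.
    eigvals = scales ** 2                                    # (N, 3)
    p = eigvals / (eigvals.sum(dim=-1, keepdim=True) + eps)  # normalized spectrum
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)          # (N,)
    return torch.exp(entropy)                                # values in [1, 3]

def erank_penalty(scales: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Illustrative penalty that grows as a Gaussian collapses toward a
    # needle-like shape (effective rank -> 1) and vanishes otherwise.
    erank = effective_rank(scales)
    return torch.clamp(-torch.log(erank - 1.0 + eps), min=0.0).mean()

# Example: weighting the term with the hyperparameter value quoted in the table.
scales = torch.rand(1024, 3, requires_grad=True)  # dummy scales for illustration
lambda_erank = 0.01
loss = lambda_erank * erank_penalty(scales)  # added to the usual rendering loss
loss.backward()
```

In an actual 3D Gaussian Splatting training loop, this term would be added to the photometric rendering loss and weighted by λ_erank = 0.01, as stated in the Experiment Setup row above.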