Compressible-composable NeRF via Rank-residual Decomposition
Authors: Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, Gang Zeng
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our method is able to achieve comparable rendering quality to state-of-the-art methods, while enabling extra capability of compression and composition. Code is available at https://github.com/ashawkey/CCNeRF. |
| Researcher Affiliation | Academia | Jiaxiang Tang¹, Xiaokang Chen¹, Jingbo Wang², Gang Zeng¹,³. ¹School of Intelligence Science and Technology, Peking University; ²Chinese University of Hong Kong; ³Intelligent Terminal Key Laboratory of Sichuan Province. {tjx, pkucxk}@pku.edu.cn, wj020@ie.cuhk.edu.hk, zeng@pku.edu.cn |
| Pseudocode | No | The paper describes its methodology in narrative form with mathematical equations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ashawkey/CCNeRF. |
| Open Datasets | Yes | We mainly carry out experiments on the NeRF-synthetic dataset [26] (CC BY 3.0 license) and the Tanks and Temples dataset [17] (CC BY-NC-SA 3.0 license). |
| Dataset Splits | No | The paper mentions using the NeRF-synthetic and Tanks and Temples datasets, but does not provide percentages, sample counts, or a methodology for the training, validation, and test splits. |
| Hardware Specification | Yes | All the experiments are performed on one NVIDIA V100 GPU. |
| Software Dependencies | No | The model is implemented with the PyTorch framework [32]. However, a specific version number for PyTorch or other software dependencies is not provided. |
| Experiment Setup | Yes | We use the Adam optimizer [16] with an initial learning rate of 0.02 for the factorized matrices, and 0.001 for the singular values. |
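
The optimizer configuration quoted in the Experiment Setup row maps naturally onto PyTorch parameter groups. Below is a minimal sketch of that setup, assuming hypothetical parameter lists `factor_params` (the factorized matrices) and `sigma_params` (the singular values); the actual parameter names and shapes in the CCNeRF codebase may differ.

```python
import torch

# Hypothetical placeholders standing in for the model's rank-decomposed
# factor matrices and their singular values (names and shapes are
# illustrative, not taken from the CCNeRF code).
factor_params = [torch.nn.Parameter(torch.randn(64, 64))]
sigma_params = [torch.nn.Parameter(torch.ones(64))]

# Adam with per-group learning rates, as stated in the paper:
# 0.02 for the factorized matrices, 0.001 for the singular values.
optimizer = torch.optim.Adam([
    {"params": factor_params, "lr": 0.02},
    {"params": sigma_params, "lr": 0.001},
])
```

Using two parameter groups lets a single optimizer step apply the faster rate to the factors while updating the singular values more conservatively, which matches the learning rates reported in the paper.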