Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Compressible-composable NeRF via Rank-residual Decomposition
Authors: Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, Gang Zeng
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our method is able to achieve comparable rendering quality to state-of-the-art methods, while enabling extra capability of compression and composition. Code is available at https://github.com/ashawkey/CCNeRF. |
| Researcher Affiliation | Academia | Jiaxiang Tang1, Xiaokang Chen1, Jingbo Wang2, Gang Zeng1,3 1School of Intelligence Science and Technology, Peking University 2Chinese University of Hong Kong 3Intelligent Terminal Key Laboratory of Sichuan Province EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes its methodology in narrative form with mathematical equations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ashawkey/CCNeRF. |
| Open Datasets | Yes | We mainly carry out experiments on the NeRF-synthetic dataset [26] (CC BY 3.0 license) and the Tanks and Temples dataset [17] (CC BY-NC-SA 3.0 license). |
| Dataset Splits | No | The paper mentions using the NeRF-synthetic and Tanks and Temples datasets, but does not explicitly provide specific percentages, sample counts, or detailed methodology for training, validation, and test splits. |
| Hardware Specification | Yes | All the experiments are performed on one NVIDIA V100 GPU. |
| Software Dependencies | No | The model is implemented with the PyTorch framework [32]. However, a specific version number for PyTorch or other software dependencies is not provided. |
| Experiment Setup | Yes | We use the Adam optimizer [16] with an initial learning rate of 0.02 for the factorized matrices, and 0.001 for the singular values. |
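The experiment-setup row reports two distinct learning rates: 0.02 for the factorized matrices and 0.001 for the singular values. In PyTorch this kind of setup is typically expressed with Adam parameter groups. The sketch below is an illustration of that pattern, not the authors' code; the parameter tensors (`factor_matrices`, `singular_values`) are hypothetical stand-ins for the model's real rank-decomposition factors.

```python
import torch

# Hypothetical placeholders for the learned tensors; shapes are arbitrary
# and chosen only to make the example self-contained.
factor_matrices = [torch.nn.Parameter(torch.randn(64, 16))]
singular_values = [torch.nn.Parameter(torch.randn(16))]

# One Adam optimizer with two parameter groups, using the learning
# rates reported in the paper's experiment setup.
optimizer = torch.optim.Adam([
    {"params": factor_matrices, "lr": 0.02},
    {"params": singular_values, "lr": 0.001},
])

# Each group keeps its own learning rate throughout training.
for group in optimizer.param_groups:
    print(group["lr"])
```

A single `optimizer.step()` then updates both groups, each at its own rate, which matches the paper's description of treating the factorized matrices and singular values differently during optimization.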