Rad-NeRF: Ray-decoupled Training of Neural Radiance Field
Authors: Lidong Guo, Xuefei Ning, Yonggan Fu, Tianchen Zhao, Zhuoliang Kang, Jincheng Yu, Yingyan (Celine) Lin, Yu Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on five datasets demonstrate that Rad-NeRF can enhance the rendering performance across a wide range of scene types compared with existing single-NeRF and multi-NeRF methods. |
| Researcher Affiliation | Collaboration | Lidong Guo¹, Xuefei Ning¹, Yonggan Fu², Tianchen Zhao¹, Zhuoliang Kang³, Jincheng Yu¹, Yingyan (Celine) Lin², Yu Wang¹ (¹Tsinghua University, ²Georgia Institute of Technology, ³Meituan) |
| Pseudocode | No | The paper describes methods and equations but does not include any block labeled pseudocode or algorithm. |
| Open Source Code | Yes | Code is available at https://github.com/thu-nics/Rad-NeRF. |
| Open Datasets | Yes | We use five datasets from different types of scenes to evaluate our Rad-NeRF. (1) Object dataset: we take the Masked Tanks-And-Temples dataset (MaskTAT) [13] for evaluation... (2) 360-degree inward/outward-facing datasets: we take the Tanks-And-Temples (TAT) dataset with unmasked background [13] and the NeRF-360-v2 dataset [2] to evaluate... (3) Free shooting-trajectory datasets: we conduct experiments on the Free-Dataset [30] and the ScanNet dataset [6]... |
| Dataset Splits | No | The paper uses various datasets for evaluation but does not explicitly provide the training, validation, and test splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | We train the NeRFs for 20k iterations on a single RTX-3090 GPU. |
| Software Dependencies | No | Our Rad-NeRF is built upon Instant-NGP [18] using a third-party PyTorch implementation and costs no more than one hour of training. |
| Experiment Setup | Yes | For Instant-NGP and our Rad-NeRF, we train the NeRFs for 20k iterations on a single RTX-3090 GPU. We use the Adam optimizer with a batch size of 8192 rays and a learning rate decaying from 1×10⁻² to 3×10⁻⁴. For the weights of the regularization terms in Equation 6, λ1 is set to 1×10⁻⁴ on the NeRF-360-v2 and Free datasets, and to 5×10⁻³ on the other datasets. We set λ2 to 1×10⁻² on all datasets. A hedged sketch of this configuration follows the table. |
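
To make the reported setup concrete, below is a minimal training-loop sketch in PyTorch, assuming a standard Adam plus exponential-decay recipe. The model, ray batches, and regularizer are dummy placeholders so the snippet runs on its own; they are not the paper's actual Rad-NeRF components (those are in the linked repository), and Equation 6's two regularization terms are collapsed into one placeholder term purely for illustration.

```python
import torch

# Minimal sketch of the reported training configuration (NOT the authors'
# code): 20k Adam iterations, 8192 rays per batch, learning rate decayed
# from 1e-2 to 3e-4, and Eq. 6's regularizers weighted by lambda_1/lambda_2.
NUM_ITERS = 20_000
BATCH_RAYS = 8_192
LR_START, LR_END = 1e-2, 3e-4
LAMBDA_1 = 1e-4   # 1e-4 on NeRF-360-v2/Free; 5e-3 on the other datasets
LAMBDA_2 = 1e-2   # same value on all datasets

# Dummy stand-in for the Instant-NGP-style backbone: maps a 6-D ray
# (origin + direction) to an RGB color.
model = torch.nn.Sequential(
    torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

optimizer = torch.optim.Adam(model.parameters(), lr=LR_START)
# Per-step exponential decay that reaches LR_END at iteration NUM_ITERS.
gamma = (LR_END / LR_START) ** (1.0 / NUM_ITERS)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for step in range(NUM_ITERS):
    rays = torch.randn(BATCH_RAYS, 6)        # dummy ray batch
    target_rgb = torch.rand(BATCH_RAYS, 3)   # dummy ground-truth colors
    pred_rgb = model(rays)
    photometric = torch.nn.functional.mse_loss(pred_rgb, target_rgb)
    reg = pred_rgb.abs().mean()              # placeholder for Eq. 6's terms
    loss = photometric + LAMBDA_1 * reg + LAMBDA_2 * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```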