DäRF: Boosting Radiance Fields from Sparse Input Views with Monocular Depth Adaptation
Authors: Jiuhn Song, Seonghoon Park, Honggyu An, Seokju Cho, Min-Seop Kwak, Sungjin Cho, Seungryong Kim
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show our framework achieves state-of-the-art results both quantitatively and qualitatively, demonstrating consistent and reliable performance on both indoor and outdoor real-world datasets. We evaluate and compare our approach on real-world indoor and outdoor scene datasets, establishing new state-of-the-art results for the benchmarks. |
| Researcher Affiliation | Academia | Korea University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and pre-trained weights will be made publicly available. |
| Open Datasets | Yes | Following previous works [37, 45], we use a subset of sparse-view ScanNet data [10] comprising three indoor scenes... For outdoor reconstruction, we further test on 5 challenging scenes from the Tanks and Temples dataset [19]. |
| Dataset Splits | No | The paper mentions "training images" and "test images" (e.g., "18 to 20 training images and 8 test images") but does not explicitly provide details about a validation dataset split or how it was used for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions several software components like "K-Planes [32] as NeRF", "DPT-hybrid [34] as MDE model", and "Adam [18] as an optimizer," but it does not specify their version numbers or other ancillary software details needed for reproduction. |
| Experiment Setup | Yes | We use Adam [18] as an optimizer, with a learning rate of 1 × 10⁻² for NeRF and 1 × 10⁻⁵ for the MDE, along with cosine warmup learning rate scheduling. |
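
For context, the quoted experiment setup maps onto a standard PyTorch training configuration. The sketch below is a minimal illustration only: the paper does not publish this code, so the placeholder `nerf_model` and `mde_model` modules, the warmup length, and the total step count are assumptions; only the optimizer choice and the two learning rates come from the quote above.

```python
import math
import torch

# Hypothetical stand-ins for the two trained components; the paper uses
# K-Planes as the NeRF backbone and DPT-hybrid as the MDE model.
nerf_model = torch.nn.Linear(3, 4)  # placeholder module
mde_model = torch.nn.Linear(3, 1)   # placeholder module

# Adam with separate learning rates per parameter group:
# 1e-2 for the NeRF and 1e-5 for the MDE, as quoted from the paper.
optimizer = torch.optim.Adam([
    {"params": nerf_model.parameters(), "lr": 1e-2},
    {"params": mde_model.parameters(), "lr": 1e-5},
])

# "Cosine warmup" scheduling, read here as linear warmup followed by
# cosine decay; WARMUP_STEPS and TOTAL_STEPS are assumed values.
WARMUP_STEPS, TOTAL_STEPS = 500, 30_000

def warmup_cosine(step: int) -> float:
    """Multiplier applied to each group's base learning rate."""
    if step < WARMUP_STEPS:
        return step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)

# In the training loop: call optimizer.step(), then scheduler.step().
```

Because `LambdaLR` scales each parameter group's base learning rate by the same multiplier, the warmup-then-cosine shape applies to both the NeRF and MDE groups while preserving their 1000× ratio.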