Ced-NeRF: A Compact and Efficient Method for Dynamic Neural Radiance Fields
Authors: Youtian Lin
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation of dynamic scene datasets shows that our Ced-NeRF achieves fast rendering speeds while maintaining high-quality rendering results. Our method outperforms the current state-of-the-art methods in terms of quality, training and rendering speed. We evaluate our approach on dynamic scenes captured with a monocular camera as well as in a multi-camera setting. Our results demonstrate that our approach achieves fast training while maintaining high-quality rendering results compared to state-of-the-art methods. |
| Researcher Affiliation | Academia | Youtian Lin (Nanjing University; Harbin Institute of Technology), linyoutian.loyot@gmail.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | arXiv version with supplementary materials is available at https://github.com/Linyou/Ced-NeRF |
| Open Datasets | Yes | We first evaluate our method on the D-NeRF dataset (Pumarola et al. 2021)... We then present a comprehensive analysis comparing our method with state-of-the-art techniques on the HyperNeRF dataset (Park et al. 2021b)... We further compare our method with state-of-the-art methods on the Plenoptic Video dataset (Li et al. 2022)... |
| Dataset Splits | Yes | To ensure a fair comparison, we downsampled images to 540×960 in our experiments and followed the training and validation camera split provided by (Park et al. 2021b). |
| Hardware Specification | Yes | For a fair comparison, we conduct all experiments on a single RTX 3090 GPU. |
| Software Dependencies | No | Our framework is built on top of NerfAcc (Li, Tancik, and Kanazawa 2022), and is implemented in PyTorch (Paszke et al. 2019). The paper mentions software but does not provide specific version numbers for the key dependencies. |
| Experiment Setup | Yes | We train our model using the Adam (Kingma and Ba 2015) optimizer with a learning rate of 0.01, and decay the learning rate to 0.0003 every 1k steps. We train the model for 20K steps in synthetic scenes and 40K steps in real-world scenes. For better performance, we set the step size of the deformation, α, to 0.0001, the attenuation strength, λ, to 60, and the loss hyper-parameter, ξ, to 0.001 for synthetic scenes and 1 for real-world scenes. (See the training sketch below.) |
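
The quoted optimizer settings translate into a few lines of PyTorch. The following is a minimal sketch, not the authors' code: the model is a stand-in placeholder, and because the paper's phrasing "decay the learning rate to 0.0003 every 1k steps" is ambiguous, an exponential schedule that anneals from 0.01 to 0.0003 over the full run is assumed here. The hyper-parameters α, λ, and ξ are omitted, since how they enter the loss is specific to the Ced-NeRF model.

```python
# Minimal training-loop sketch of the reported setup (Adam, lr 0.01
# decayed toward 0.0003). Assumptions: the model is a placeholder, and
# the decay is read as an exponential anneal from 1e-2 to 3e-4.
import torch

model = torch.nn.Linear(4, 3)          # placeholder for the radiance-field model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

total_steps = 20_000                   # 20K for synthetic scenes; 40K for real-world
# Choose gamma so the lr reaches 3e-4 at the final step:
# gamma ** total_steps == 3e-4 / 1e-2
gamma = (3e-4 / 1e-2) ** (1.0 / total_steps)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).square().mean()  # stand-in loss term
    loss.backward()
    optimizer.step()
    scheduler.step()                   # lr *= gamma after every step
```

If the paper instead means a stepwise drop every 1k iterations, `torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=...)` would be the closer fit; the released repository linked above would settle which reading is correct.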