CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation
Authors: Xiuzhe Wu, Peng Dai, Weipeng Deng, Handi Chen, Yang Wu, Yan-Pei Cao, Ying Shan, Xiaojuan Qi
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate that CL-NeRF can synthesize high-quality novel views of both changed and unchanged regions with high training efficiency, surpassing existing methods in terms of reducing forgetting and adapting to changes. |
| Researcher Affiliation | Collaboration | The University of Hong Kong; Tencent AI Lab; ARC Lab, Tencent PCG |
| Pseudocode | No | The paper describes its method in detail but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | Code and benchmark will be made available. |
| Open Datasets | No | To the best of our knowledge, there is no existing dataset for our proposed continual learning task of NeRF and no proper evaluation metrics to evaluate the forgetting behavior. Therefore, in this section, we will first propose a new dataset containing two synthetic scenes, one real-world scene, and an extra city-level scene. Code and benchmark will be made available. |
| Dataset Splits | Yes | Furthermore, we captured all images at 960x640 resolution and partitioned the overall data into an 80% training set and a 20% validation set. (One way such a split might be implemented is sketched after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or cloud instance specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | The paper states 'More implementation details are described in the supplementary file,' but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) in the main text. |
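
The paper reports an 80%/20% train/validation partition of the captured 960x640 images, but does not say how the split was drawn. Below is a minimal sketch of such a partition, assuming a seeded random shuffle; the function name `split_scene_images` and the seeding policy are hypothetical illustrations, not details from the paper.

```python
import random

def split_scene_images(image_paths, train_frac=0.8, seed=0):
    """Partition captured images into train/validation sets (80/20 by default).

    A fixed seed keeps the split reproducible across runs; this is an
    assumption, since the paper does not describe its splitting procedure.
    """
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)            # randomize ordering before slicing
    n_train = int(len(paths) * train_frac)
    return paths[:n_train], paths[n_train:]

# Example: 100 captured frames -> 80 train, 20 validation
frames = [f"frame_{i:04d}.png" for i in range(100)]
train_set, val_set = split_scene_images(frames)
print(len(train_set), len(val_set))  # 80 20
```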