CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning

Authors: Qingsong Yan, Qiang Wang, Kaiyong Zhao, Jie Chen, Bo Li, Xiaowen Chu, Fei Deng

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate our method, we use a challenging real-world dataset, NeRFBuster, which provides 12 scenes under complex trajectories. Results demonstrate that CF-NeRF is robust to rotation and achieves state-of-the-art results without providing prior information and constraints." "Experiments of our method achieve state-of-the-art results on the NeRFBuster dataset (Warburg et al. 2023) captured in the real world, proving that CF-NeRF can estimate accurate camera parameters with the specifically designed training procedure."
Researcher Affiliation | Collaboration | Qingsong Yan (1), Qiang Wang (2,*), Kaiyong Zhao (3), Jie Chen (4), Bo Li (5), Xiaowen Chu (6,5,*), Fei Deng (1,7); 1 Wuhan University, Wuhan, China; 2 Harbin Institute of Technology (Shenzhen), Shenzhen, China; 3 XGRIDS, Shenzhen, China; 4 Hong Kong Baptist University, Hong Kong SAR, China; 5 The Hong Kong University of Science and Technology, Hong Kong SAR, China; 6 The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; 7 Hubei Luojia Laboratory, Wuhan, China
Pseudocode | No | The paper describes the CF-NeRF method using prose and diagrams (Figure 2), but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not provide any statement about releasing its own source code, nor does it include a specific link to a code repository for CF-NeRF. The only GitHub link mentioned (github.com/ventusff/improved-nerfmm) refers to a related work.
Open Datasets | Yes | "We evaluate our method using a real-world dataset NeRFBuster (Warburg et al. 2023), mainly rotating around an object."
Dataset Splits | No | "To ensure a fair comparison and avoid the influence of varying network backbones across different methods, we uniformly use the NerfAcc (Li, Tancik, and Kanazawa 2022), where we select one image for testing in every eight images and the remainder is for training." While a train/test split is mentioned, there is no explicit mention of a separate validation split or how it was used. (A minimal sketch of this every-eighth-image split follows the table.)
Hardware Specification | Yes | "Throughout all our experiments, we use the NVIDIA RTX3090."
Software Dependencies | No | "CF-NeRF is implemented using PyTorch." PyTorch is mentioned, but no specific version number is provided, nor are other software dependencies with version numbers.
Experiment Setup | Yes | "Specifically, we set the learning rate of θ to 0.001, which undergoes a decay of 0.9954 every 200 epochs. Similarly, the learning rate of δ is set to 0.001 and undergoes a decay of 0.9000 every 2000 epochs." "Here, we describe how to set the hyper-parameters in CF-NeRF. We set Ninit and Npart to 3 to meet the minimum requirements that can filter outliers based on MVG. To balance drift and efficiency, we set Nglob to 5. Considering the input image resolution, we set dG to 3 to reconstruct all parameters by the coarse-to-fine strategy. The most important parameter in CF-NeRF is the iteration count, which is the epoch number for each image. During initialization, we set ξinit to 3000 to guarantee that θ and δ can be correctly initialized with fewer images. Subsequently, during the incremental training, we maintain a consistent value of ξ, setting ξ = ξloc = ξpart = ξglob = ξG to 900, thus reconstructing the scene from images one by one." (A hedged PyTorch sketch of this learning-rate schedule follows the table.)
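
The train/test protocol quoted under "Dataset Splits" is simple enough to express directly. Below is a minimal sketch, assuming a scene's images form an ordered list and that the first image of each group of eight is the one held out; the paper does not say which index is used, and all names (`split_every_eighth`, the `scene/images` layout) are illustrative rather than taken from the CF-NeRF code.

```python
# Minimal sketch of the "one test image in every eight" split quoted above.
# `image_paths` is assumed to be the ordered list of a scene's images; which
# image within each group of eight is held out is an assumption here.
from pathlib import Path


def split_every_eighth(image_paths):
    """Hold out every eighth image for testing; the rest are used for training."""
    test = [p for i, p in enumerate(image_paths) if i % 8 == 0]
    train = [p for i, p in enumerate(image_paths) if i % 8 != 0]
    return train, test


if __name__ == "__main__":
    images = sorted(Path("scene/images").glob("*.png"))  # hypothetical layout
    train_imgs, test_imgs = split_every_eighth(images)
    print(len(train_imgs), "train /", len(test_imgs), "test")
```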
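
The learning-rate schedule quoted under "Experiment Setup" maps naturally onto PyTorch's StepLR. The sketch below is a hedged illustration under that assumption: `radiance_field` and `camera_params` are stand-ins for the paper's θ (radiance-field weights) and δ (camera parameters), the loss computation is omitted, and none of this is the authors' implementation.

```python
# Hedged sketch of the reported learning-rate schedule, assuming two parameter
# groups: theta (radiance-field weights) and delta (camera parameters).
import torch

radiance_field = torch.nn.Linear(63, 4)                  # stand-in for the NeRF backbone
camera_params = torch.nn.Parameter(torch.zeros(10, 6))   # stand-in for per-image poses

opt_theta = torch.optim.Adam(radiance_field.parameters(), lr=1e-3)
opt_delta = torch.optim.Adam([camera_params], lr=1e-3)

# Multiply the lr by 0.9954 every 200 epochs (theta) and by 0.9000 every
# 2000 epochs (delta), matching the decay factors quoted in the table.
sched_theta = torch.optim.lr_scheduler.StepLR(opt_theta, step_size=200, gamma=0.9954)
sched_delta = torch.optim.lr_scheduler.StepLR(opt_delta, step_size=2000, gamma=0.9000)

for epoch in range(3000):  # e.g. xi_init = 3000 epochs during initialization
    # ... render rays, compute the photometric loss, call loss.backward() ...
    opt_theta.step()
    opt_delta.step()
    opt_theta.zero_grad()
    opt_delta.zero_grad()
    sched_theta.step()
    sched_delta.step()
```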