UC-NERF: Neural Radiance Field for Under-Calibrated Multi-View Cameras in Autonomous Driving
Authors: Kai Cheng, Xiaoxiao Long, Wei Yin, Jin Wang, Zhiqiang Wu, Yuexin Ma, Kaixuan Wang, Xiaozhi Chen, Xuejin Chen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the public datasets Waymo (Sun et al., 2020) and nuScenes (Caesar et al., 2020) show that our method achieves high-quality renderings with a multi-camera system and outperforms other baselines by a large margin. |
| Researcher Affiliation | Collaboration | 1 MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China 2 The University of Hong Kong 3 PKU-Wuhan Institute for Artificial Intelligence 4 DJI Technology 5 ShanghaiTech University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states that the Zip-NeRF baseline uses the implementation by Gu (2023) and provides a URL for it. However, it gives no concrete access (link or explicit release statement) to the source code of their own method, UC-NeRF. |
| Open Datasets | Yes | We conduct experiments on two urban datasets with images captured with multi-camera settings, i.e., Waymo (Sun et al., 2020) and nuScenes (Caesar et al., 2020). |
| Dataset Splits | No | The paper mentions selecting "one of every eight images of each camera as testing images and the remaining ones as training data" but does not explicitly describe a separate validation split or its proportion. It only specifies train and test data. |
| Hardware Specification | Yes | All methods are tested on one NVIDIA Tesla V100 GPU with an image resolution of 1920 × 1280. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and Superpoint for keypoint detection but does not provide specific version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions) that are critical for reproducibility. |
| Experiment Setup | Yes | We train our UC-NeRF for 40k iterations using Adam optimizer with a batch size of 32384. The learning rate is logarithmically reduced from 0.008 to 0.001, with a warm-up phase consisting of 5000 iterations. The weight of sky loss is set to 2e-3. The dimension of sky latent code and foreground latent code is set to 4. For the MLP that decodes the latent code, we use three layers with 256 hidden units. The weight of transformation regularization is set to 2e-3. |
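The dataset-splits row notes that the paper holds out one of every eight images per camera as test data. A minimal sketch of that rule, assuming a simple stride-8 selection starting from the first frame (the paper does not specify the offset or any helper function names, so these are illustrative):

```python
def split_every_eighth(image_paths):
    """Hypothetical sketch: hold out one of every eight frames of a camera
    as the test set (as the review quotes the paper); the rest are training
    data. The stride-8 offset of 0 is an assumption."""
    test = image_paths[::8]
    train = [p for i, p in enumerate(image_paths) if i % 8 != 0]
    return train, test
```

Note that this yields only train and test subsets; as the review observes, no validation split is described.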
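The experiment-setup row describes a learning rate that decays logarithmically from 0.008 to 0.001 over 40k iterations with a 5000-iteration warm-up. A hedged sketch of such a schedule, assuming log-linear (geometric) interpolation and a linear warm-up multiplier (the paper is not quoted on the exact warm-up shape, so that part is an assumption):

```python
import math

def lr_schedule(step, total_steps=40_000, lr_init=0.008,
                lr_final=0.001, warmup_steps=5_000):
    """Illustrative sketch of the quoted schedule: log-linear decay from
    lr_init to lr_final over total_steps, scaled by a linear warm-up
    factor for the first warmup_steps (warm-up shape is an assumption)."""
    t = min(max(step / total_steps, 0.0), 1.0)
    # interpolate in log space so the decay is logarithmic, as described
    lr = math.exp((1.0 - t) * math.log(lr_init) + t * math.log(lr_final))
    if step < warmup_steps:
        lr *= step / warmup_steps  # linear warm-up (assumed form)
    return lr
```

At the midpoint (step 20 000) this gives the geometric mean of the two rates, roughly 2.8e-3, which is characteristic of log-space interpolation.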