NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields
Authors: Junge Zhang, Feihu Zhang, Shaochen Kuang, Li Zhang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds. It reveals that the trained models are able to achieve similar accuracy when compared with the same model trained on the real LiDAR data. Besides, the generated data is capable of boosting the accuracy through pre-training, which helps reduce the requirements of the real labeled data. |
| Researcher Affiliation | Academia | Junge Zhang (1), Feihu Zhang (2), Shaochen Kuang (3), Li Zhang (1)* — (1) Fudan University, (2) University of Oxford, (3) South China University of Technology |
| Pseudocode | No | The paper does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code is available at https://github.com/fudanzvg/NeRF-LiDAR |
| Open Datasets | Yes | We use the standard nuScenes self-driving dataset (Caesar et al. 2020) for training and evaluation. nuScenes contains about 1000 scenes collected from different cities. |
| Dataset Splits | Yes | In the training set, we use a total of 7000 unlabeled LiDAR frames and 30000 images for training our NeRF-LiDAR model. An extra 1000 labeled LiDAR frames are provided in these nuScenes scenes; we mainly use these labeled data for testing and fine-tuning in the experiments. The first test set (Test Set 1) consists of 400 labeled real point clouds that are extracted from the 30 reconstructed scenes and are not used for training; this validation set is from the same scenes as the simulation data. The second test set (Test Set 2) is the whole nuScenes validation set, which consists of 5700 LiDAR point clouds from other nuScenes scenes (not including the 30 selected scenes). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions software components and models like 'SegFormer', 'Cylinder3D', 'RangeNet++', and 'VGG' but does not specify their version numbers or the versions of any underlying programming languages, libraries, or frameworks required for replication. |
| Experiment Setup | Yes | To evaluate the quality of the generated point clouds and point-wise labels, we train different LiDAR segmentation models (Cylinder3D (Zhou et al. 2021) and RangeNet++ (Milioto et al. 2019)) on the generated data and compare them with the same models trained on the real nuScenes LiDAR data (25k iterations). A sketch of this comparison protocol follows the table. |
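
The Dataset Splits and Experiment Setup rows describe the comparison protocol only in prose. Below is a minimal Python sketch of that protocol, assuming hypothetical helper functions (`load_generated_lidar`, `load_real_lidar`, `build_model`, `train`, `evaluate_miou`); these are placeholders for illustration and are not part of the paper or the released NeRF-LiDAR code.

```python
# Minimal sketch of the evaluation protocol described in the table above:
# train the same LiDAR segmentation model (e.g. Cylinder3D or RangeNet++)
# once on NeRF-LiDAR-generated point clouds and once on real nuScenes
# frames, then evaluate both on the two real test sets.
# Every helper below is a hypothetical placeholder, not the authors' API.

def load_generated_lidar():
    """Generated point clouds with point-wise labels from the 30 reconstructed scenes."""
    raise NotImplementedError("placeholder: load NeRF-LiDAR simulated frames")

def load_real_lidar(split):
    """Real labeled nuScenes LiDAR frames: 'train', 'test1' (400 frames), or 'val' (~5700 frames)."""
    raise NotImplementedError("placeholder: load real nuScenes frames")

def build_model():
    raise NotImplementedError("placeholder: e.g. Cylinder3D or RangeNet++")

def train(model, data, iterations=25_000):
    raise NotImplementedError("placeholder: standard segmentation training loop")

def evaluate_miou(model, data):
    raise NotImplementedError("placeholder: per-class IoU averaged over classes")

def compare_generated_vs_real():
    test_set_1 = load_real_lidar("test1")   # 400 labeled frames from the 30 reconstructed scenes
    test_set_2 = load_real_lidar("val")     # full nuScenes validation set

    results = {}
    for name, train_data in [("generated", load_generated_lidar()),
                             ("real", load_real_lidar("train"))]:
        model = build_model()
        train(model, train_data, iterations=25_000)   # 25k iterations, as stated in the paper
        results[name] = {
            "Test Set 1 mIoU": evaluate_miou(model, test_set_1),
            "Test Set 2 mIoU": evaluate_miou(model, test_set_2),
        }
    return results
```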