PRNet: Point-Range Fusion Network for Real-Time LiDAR Semantic Segmentation
Authors: Xiaoyan Li, Gang Zhang, Tao Jiang, Xufen Cai, Zhenhua Wang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the SemanticKITTI and nuScenes benchmarks demonstrate that the PRNet pushes the range-based methods to a new state-of-the-art, and achieves a better speed-accuracy trade-off. |
| Researcher Affiliation | Collaboration | Xiaoyan Li (1,3), Gang Zhang (2), Tao Jiang (3), Xufen Cai (4) and Zhenhua Wang (2); (1) University of Chinese Academy of Sciences, (2) DAMO Academy, Alibaba Group, (3) Institute of Computing Technology, Chinese Academy of Sciences, (4) Beijing School |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There is no specific repository link, explicit code release statement, or mention of code in supplementary materials. |
| Open Datasets | Yes | We evaluate the effectiveness and efficiency of the proposed PRNet on the public nuScenes [Caesar et al., 2020] and SemanticKITTI [Behley et al., 2019] single-scan LiDAR semantic segmentation benchmarks. |
| Dataset Splits | Yes | nuScenes for LiDAR semantic segmentation is a newly released benchmark with 1,000 scenes collected in Boston and Singapore. It splits 28,130 samples for training, 6,019 for validation, and 6,008 for testing. ... The training set (19,130 scans) consists of sequences 00 to 10 except 08, and sequence 08 (4,071 scans) is used for validation. |
| Hardware Specification | Yes | All experiments are conducted with PyTorch FP32 on an NVIDIA RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions "PyTorch FP32" but does not specify a version number for PyTorch or other software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | The proposed PRNet is trained from scratch for 48 epochs with a batch size of 16 on 8 GPUs. Stochastic gradient descent (SGD) serves as the optimizer with a weight decay of 0.001, a momentum of 0.9, and an initial learning rate of 0.02, which is decayed by 0.1 every 10 epochs. Following the convention, the data augmentation strategies include random flipping along the x and y axes, random global scale sampled from [0.95, 1.05], random rotation around the z axis, random Gaussian noise N(0, 0.02), and instance CutMix [Xu et al., 2021]. |
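The reported training schedule is concrete enough to sketch. The helper below reproduces the stated step decay (initial learning rate 0.02, multiplied by 0.1 every 10 epochs, over 48 epochs); the function name is illustrative, not from the paper, and this is a minimal sketch of the schedule rather than the authors' implementation.

```python
def step_decay_lr(epoch, base_lr=0.02, gamma=0.1, step=10):
    """Learning rate in effect at a given epoch, following the paper's
    stated schedule: initial lr 0.02, decayed by a factor of 0.1 every
    10 epochs (48 epochs total)."""
    return base_lr * gamma ** (epoch // step)

# Full 48-epoch schedule as reported in the Experiment Setup row.
schedule = [step_decay_lr(e) for e in range(48)]
```

In PyTorch terms (the paper's framework), this would presumably correspond to `torch.optim.SGD(params, lr=0.02, momentum=0.9, weight_decay=0.001)` paired with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)`, though the paper does not state its exact scheduler code.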