PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
Authors: Aoran Xiao, Jiaxing Huang, Dayan Guan, Kaiwen Cui, Shijian Lu, Ling Shao
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that PolarMix achieves superior performance consistently across different perception tasks and scenarios. |
| Researcher Affiliation | Collaboration | School of Computer Science and Engineering, Nanyang Technological University; Mohamed bin Zayed University of Artificial Intelligence; Terminus Group, China |
| Pseudocode | Yes | Algorithm 1: PolarMix. |
| Open Source Code | Yes | Code is available at https://github.com/xiaoaoran/polarmix |
| Open Datasets | Yes | The first is SemanticKITTI [1]... The second is the nuScenes-lidarseg [12] dataset... The third is SemanticPOSS [30]... SynLiDAR [46] is a synthetic LiDAR point cloud dataset... |
| Dataset Splits | Yes | SemanticKITTI: We follow the widely adopted split and use sequences 00-07 and 09-10 as the training set and sequence 08 for validation. ... nuScenes-lidarseg: We follow the official split of training and validation data. ... SemanticPOSS: We follow the official benchmark setting, i.e., sequence 03 for validation and the rest for training. |
| Hardware Specification | Yes | We conducted experiments with a single Tesla 2080Ti GPU for MinkNet and SPVCNN and a Tesla V100 GPU for RandLA-Net and Cylinder3D. |
| Software Dependencies | No | The paper mentions using open-source repositories for various networks (e.g., MinkNet, SPVCNN, RandLA-Net, Cylinder3D, OpenPCDet) but does not provide specific version numbers for software dependencies such as libraries or frameworks. |
| Experiment Setup | Yes | We adopt the default training hyper-parameters in the open-source repositories for all four networks; the only modification is the batch size for SPVCNN and MinkNet (we change it to 8). ... For augmentation with scene-level swapping, we randomly crop 180° sectors from 360° as [α, β] for point swapping. ... We set δ1 and δ2 to 0.5 and 1, respectively. (A hedged code sketch of this setup follows the table.) |
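The Pseudocode and Experiment Setup rows above describe the two PolarMix operations: scene-level swapping of an azimuth sector between two scans, and instance-level rotate-pasting of points from selected classes. The following is a minimal NumPy sketch of Algorithm 1 under stated assumptions: the function name `polarmix`, the `(N, 4)` point layout (x, y, z, intensity), and the default rotation angles `omegas` are illustrative choices, not taken from the paper; the authors' reference implementation is in the repository linked above.

```python
import numpy as np

def polarmix(pts_a, labels_a, pts_b, labels_b,
             alpha, beta, instance_classes,
             omegas=(np.pi / 2, np.pi, 3 * np.pi / 2),
             swap_prob=0.5, paste_prob=1.0):
    """Hedged sketch of PolarMix (Algorithm 1 in the paper).

    pts_*:            (N, 4) arrays of x, y, z, intensity (assumed layout).
    labels_*:         (N,) per-point semantic labels.
    alpha, beta:      azimuth bounds (radians) of the sector to swap.
    instance_classes: label ids whose points are rotate-pasted.
    omegas:           z-axis rotation angles for instance duplication
                      (illustrative values, not from the paper).
    swap_prob, paste_prob: the paper's delta_1 and delta_2.
    """
    yaw_a = np.arctan2(pts_a[:, 1], pts_a[:, 0])
    yaw_b = np.arctan2(pts_b[:, 1], pts_b[:, 0])

    # Scene-level swapping: replace scan A's points inside [alpha, beta)
    # with scan B's points from the same azimuth sector.
    if np.random.rand() < swap_prob:
        keep_a = ~((yaw_a >= alpha) & (yaw_a < beta))
        take_b = (yaw_b >= alpha) & (yaw_b < beta)
        pts_out = np.concatenate([pts_a[keep_a], pts_b[take_b]])
        labels_out = np.concatenate([labels_a[keep_a], labels_b[take_b]])
    else:
        pts_out, labels_out = pts_a, labels_a

    # Instance-level rotate-pasting: copy selected-class points from scan B,
    # duplicate them under several z-axis rotations, and paste them into A.
    if np.random.rand() < paste_prob:
        mask = np.isin(labels_b, instance_classes)
        inst_pts, inst_labels = pts_b[mask], labels_b[mask]
        pasted_pts, pasted_labels = [inst_pts], [inst_labels]
        for omega in omegas:
            rot = np.array([[np.cos(omega), -np.sin(omega)],
                            [np.sin(omega),  np.cos(omega)]])
            rotated = inst_pts.copy()
            rotated[:, :2] = inst_pts[:, :2] @ rot.T  # rotate x, y about z
            pasted_pts.append(rotated)
            pasted_labels.append(inst_labels)
        pts_out = np.concatenate([pts_out] + pasted_pts)
        labels_out = np.concatenate([labels_out] + pasted_labels)

    return pts_out, labels_out
```

To match the reported setting, one would sample a random 180° sector within the range returned by `arctan2`, e.g. `alpha = np.random.uniform(-np.pi, 0); beta = alpha + np.pi`, and keep `swap_prob=0.5` and `paste_prob=1.0` to mirror δ1 = 0.5 and δ2 = 1.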