CL3D: Unsupervised Domain Adaptation for Cross-LiDAR 3D Detection
Authors: Xidong Peng, Xinge Zhu, Yuexin Ma
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our method achieves state-of-the-art performance on cross-device datasets, especially for the datasets with large gaps captured by mechanical scanning LiDARs and solid-state LiDARs in various scenes. Project homepage is at https://github.com/4DVLab/CL3D.git. We conduct extensive experiments on various cross-LiDAR and synthetic-to-real domain adaptation tasks, and all get state-of-the-art performance. We also conduct detailed ablation studies quantitatively and qualitatively to demonstrate the effectiveness of different modules of our method. |
| Researcher Affiliation | Academia | Xidong Peng (1), Xinge Zhu (3), Yuexin Ma (1,2); 1: ShanghaiTech University; 2: Shanghai Engineering Research Center of Intelligent Vision and Imaging; 3: The Chinese University of Hong Kong; {pengxd, mayuexin}@shanghaitech.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project homepage is at https://github.com/4DVLab/CL3D.git. |
| Open Datasets | Yes | We consider five widely-used large-scale autonomous driving datasets to simulate the various domain shifts, which are Waymo (Sun et al. 2020), nuScenes (Caesar et al. 2020), KITTI (Geiger, Lenz, and Urtasun 2012), PandaSet (Xiao et al. 2021), and PreSIL (Hurl, Czarnecki, and Waslander 2019). |
| Dataset Splits | Yes | nuScenes consists of 28,130 training samples and 6,019 validation samples collected by the 32-beam mechanical LiDAR and KITTI consists of 7,481 annotated LiDAR frames collected by the 64-beam mechanical LiDAR. PandaSet is the only dataset whose data is captured by solid-state LiDAR, including 5,520 training samples and 2,720 validation samples. |
| Hardware Specification | Yes | As for the implementation, we use the public PyTorch (Paszke, Gross, and Massa 2019) repository MMDetection3D (Contributors 2020) and we perform experiments with a 24GB GeForce RTX 3090 GPU. |
| Software Dependencies | No | As for the implementation, we use the public PyTorch (Paszke, Gross, and Massa 2019) repository MMDetection3D (Contributors 2020) and we perform experiments with a 24GB GeForce RTX 3090 GPU. |
| Experiment Setup | Yes | During both the pre-training and self-training processes, we adopt widely used data augmentations, including random flipping, scaling, and rotation. The source data in the pre-training process are trained for 20 epochs and the target data in the self-training process are trained for 1 epoch. Other settings are the same as the official implementation of CenterPoint. (A configuration sketch of this setup is shown below the table.) |
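
The quoted experiment setup maps directly onto an MMDetection3D training configuration. The sketch below is a minimal, hedged reconstruction: the transform types (`GlobalRotScaleTrans`, `RandomFlip3D`, etc.) are standard MMDetection3D pipeline steps, but the rotation/scaling ranges, flip probability, point-cloud range, and class names are assumptions borrowed from common CenterPoint defaults, since the paper only names the augmentations and the epoch counts.

```python
# Hedged sketch of an MMDetection3D-style training pipeline matching the
# augmentations the paper reports (random flipping, scaling, rotation).
# The paper does not give ranges or probabilities; the values below are
# assumptions taken from common CenterPoint defaults, not from CL3D itself.
point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0]  # assumed
class_names = ['car', 'pedestrian', 'cyclist']              # assumed subset

train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    # Random rotation and scaling, as described in the quoted setup.
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.3925, 0.3925],     # ~ +/- 22.5 degrees, assumed
        scale_ratio_range=[0.95, 1.05],  # assumed
        translation_std=[0, 0, 0]),
    # Random flipping over the BEV horizontal axis.
    dict(type='RandomFlip3D', sync_2d=False, flip_ratio_bev_horizontal=0.5),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']),
]

# Reported schedule: 20 epochs of source pre-training, then 1 epoch of
# target self-training (runner type assumed from MMDetection3D convention).
runner_pretrain = dict(type='EpochBasedRunner', max_epochs=20)
runner_selftrain = dict(type='EpochBasedRunner', max_epochs=1)
```

Since the paper states that other settings follow the official CenterPoint implementation, any unstated hyperparameter should be resolved against that baseline config rather than the guesses above.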