Exploring Active 3D Object Detection from a Generalization Perspective
Authors: Yadan Luo, Zhuoxiao Chen, Zijian Wang, Xin Yu, Zi Huang, Mahsa Baktashmotlagh
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the effectiveness and applicability of CRB, we conduct extensive experiments on the two benchmark 3D object detection datasets of KITTI and Waymo and examine both one-stage (i.e., SECOND) and two-stage 3D detectors (i.e., PV-RCNN). |
| Researcher Affiliation | Academia | Yadan Luo, Zhuoxiao Chen, Zijian Wang, Xin Yu, Zi Huang, Mahsa Baktashmotlagh, The University of Queensland, Australia |
| Pseudocode | Yes | The algorithm is summarized in the supplemental material. |
| Open Source Code | Yes | Source code: https://github.com/Luoyadan/CRB-active-3Ddet. |
| Open Datasets | Yes | Datasets. KITTI (Geiger et al., 2012) is one of the most representative datasets for point cloud based object detection. The Waymo Open dataset (Sun et al., 2020) is a challenging testbed for autonomous driving, containing 158,361 training samples and 40,077 testing samples. |
| Dataset Splits | Yes | The dataset consists of 3,712 training samples (i.e., point clouds) and 3,769 val samples. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper mentions developing an 'active-3D-det toolbox' but does not specify any software dependencies with version numbers (e.g., Python version, PyTorch version, CUDA version). |
| Experiment Setup | Yes | The K1 and K2 are empirically set to 300 and 200 for KITTI and 2,000 and 1,200 for Waymo. We specify the settings of hyper-parameters, the training scheme and the implementation details of our model and AL baselines in Sec. B of the supplementary material. |
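The Experiment Setup and Dataset Splits rows only quote the reported numbers (K1/K2 budgets and sample counts). As a minimal sketch of how those values could be gathered into one place for a reproduction attempt, the Python snippet below collects them per dataset; the names `ACTIVE_LEARNING_SETUPS` and `check_setup` are hypothetical and do not come from the paper or the released CRB-active-3Ddet code.

```python
# Hypothetical reproduction notes (not the authors' configuration): the dict
# simply records the hyperparameters and dataset statistics quoted in the
# table above so KITTI and Waymo settings can be compared side by side.

ACTIVE_LEARNING_SETUPS = {
    "kitti": {
        "detectors": ["SECOND", "PV-RCNN"],  # one-stage and two-stage detectors examined
        "train_samples": 3712,               # KITTI training point clouds
        "val_samples": 3769,                 # KITTI validation point clouds
        "K1": 300,                           # reported K1 for KITTI
        "K2": 200,                           # reported K2 for KITTI
    },
    "waymo": {
        "detectors": ["SECOND", "PV-RCN" "N"],  # same detectors; Waymo Open dataset
        "train_samples": 158361,
        "test_samples": 40077,
        "K1": 2000,
        "K2": 1200,
    },
}


def check_setup(name: str) -> None:
    """Print a short summary and check K2 <= K1 (an assumption consistent with the reported values)."""
    cfg = ACTIVE_LEARNING_SETUPS[name]
    assert cfg["K2"] <= cfg["K1"], "reported values suggest the second stage keeps fewer candidates"
    print(f"{name}: K1={cfg['K1']}, K2={cfg['K2']}, detectors={cfg['detectors']}")


if __name__ == "__main__":
    for dataset in ACTIVE_LEARNING_SETUPS:
        check_setup(dataset)
```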