DBQ-SSD: Dynamic Ball Query for Efficient 3D Object Detection

Authors: Jinrong Yang, Lin Song, Songtao Liu, Weixin Mao, Zeming Li, Xiaoping Li, Hongbin Sun, Jian Sun, Nanning Zheng

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our method can increase the inference speed by 30%-100% on KITTI, Waymo, and ONCE datasets."
Researcher Affiliation | Collaboration | 1) Huazhong University of Science and Technology, 2) Tencent AI Lab, 3) MEGVII Technology, 4) Xi'an Jiaotong University
Pseudocode | No | The paper describes the Dynamic Ball Query (DBQ) network and its inference and training procedures, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper states, "All experiments are implemented by OpenPCDet framework," with a footnote linking to https://github.com/open-mmlab/OpenPCDet. This link points to a general open-source framework, not to the specific code or modifications developed for this paper's method.
Open Datasets | Yes | "We evaluate our detector on two representative datasets: KITTI dataset (Geiger et al., 2012) and Waymo dataset (Sun et al., 2020)... ONCE (Mao et al., 2021b)."
Dataset Splits | Yes | "We randomly sample 16,384 points from the overall point cloud per single view frame... The batch size is set to 16 with 8 GPUs... The initial learning rate is 0.01 and is decayed by 0.1 at 35 and 45 epochs."
Hardware Specification | Yes | "Latency here is evaluated by a single RTX2080Ti GPU with a batch size of 16."
Software Dependencies | No | The paper mentions using the "ADAM (Kingma & Ba, 2014) optimizer with onecycle learning strategy (Smith & Topin, 2019)" and that "All experiments are implemented by OpenPCDet framework." However, it does not provide version numbers for any software dependencies, such as Python, PyTorch, or OpenPCDet itself.
Experiment Setup | Yes | "We randomly sample 16,384 points from the overall point cloud per single view frame. We train our model by ADAM (Kingma & Ba, 2014) optimizer with onecycle learning strategy (Smith & Topin, 2019). The batch size is set to 16 with 8 GPUs. The initial learning rate is 0.01 and is decayed by 0.1 at 35 and 45 epochs."
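The learning-rate schedule quoted in the Experiment Setup row (initial rate 0.01, multiplied by 0.1 at epochs 35 and 45) can be sketched in a few lines. This is an illustrative step-decay helper, not the authors' code: the paper pairs the decay with OpenPCDet's one-cycle strategy, which this simplification omits, and the function name `lr_at_epoch` is an assumption of this sketch.

```python
# Hedged sketch of the step-decay learning-rate schedule described in the
# paper's setup: lr starts at 0.01 and is multiplied by 0.1 at epochs 35
# and 45. Illustrative only; the actual runs use OpenPCDet's scheduler.

INITIAL_LR = 0.01
DECAY_FACTOR = 0.1
MILESTONES = (35, 45)

def lr_at_epoch(epoch: int) -> float:
    """Return the step-decayed learning rate for a 0-indexed epoch."""
    # Count how many milestones this epoch has reached or passed.
    decays = sum(1 for m in MILESTONES if epoch >= m)
    return INITIAL_LR * DECAY_FACTOR ** decays

if __name__ == "__main__":
    for e in (0, 34, 35, 44, 45):
        print(f"epoch {e:2d}: lr = {lr_at_epoch(e):.4f}")
```

In PyTorch, the equivalent behavior is what `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[35, 45], gamma=0.1)` provides on top of an Adam optimizer.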
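The Hardware Specification row reports latency measured on a single RTX 2080 Ti with batch size 16. A minimal sketch of how such a per-batch latency measurement is typically done is below; `measure_latency` and the dummy stand-in for the detector are hypothetical names of this sketch, not the paper's benchmarking code, and a real GPU run would additionally synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock.

```python
import time

def measure_latency(infer, batch, warmup=10, iters=100):
    """Median wall-clock latency (ms) of calling `infer` on `batch`.

    `infer` is a placeholder for the detector's forward pass. Warm-up
    iterations are discarded so caches and autotuning do not skew the
    measurement; the median is used because it is robust to scheduling
    spikes.
    """
    for _ in range(warmup):
        infer(batch)
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        infer(batch)
        times.append((time.perf_counter() - start) * 1000.0)
    times.sort()
    return times[len(times) // 2]

# Usage with a dummy stand-in for the detector and a "batch" of 16 frames:
dummy_batch = [[0.0, 0.0, 0.0]] * 16
latency_ms = measure_latency(lambda b: sum(len(f) for f in b), dummy_batch)
```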