DALDet: Depth-Aware Learning Based Object Detection for Autonomous Driving

Authors: Ke Hu, Tongbo Cao, Yuan Li, Song Chen, Yi Kang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the superiority and efficiency of DALDet. In particular, our DALDet ranks 1st on both the KITTI Car and Cyclist 2D detection test leaderboards among all 2D detectors with high efficiency, while yielding performance competitive with many leading 3D detectors.
Researcher Affiliation | Academia | University of Science and Technology of China, Hefei, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Anhui University, Hefei, China
Pseudocode | No | The paper describes the model architecture and components but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code will be available at https://github.com/hukefy/DALDet.
Open Datasets | Yes | The KITTI dataset (Geiger, Lenz, and Urtasun 2012) is a popular benchmark for autonomous driving.
Dataset Splits | Yes | We divided the training data into a training set with 3712 samples and a validation set with 3769 samples, following (Chen et al. 2015). (A split-loading sketch follows the table.)
Hardware Specification | Yes | The model training and testing were conducted using the PyTorch framework on an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using the PyTorch framework but does not specify a version number or list other software dependencies with versions.
Experiment Setup | Yes | The initial learning rate, batch size, and total number of epochs were set to 0.01, 32, and 300, respectively. During testing, we selected an IoU threshold of 0.3 for post-processing, and a maximum of 100 predictions were saved per image. (A configuration sketch follows the table.)
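
For context on the reported dataset split, the following is a minimal sketch of loading the common 3712/3769 KITTI train/val split of (Chen et al. 2015). The directory layout and file names (ImageSets/train.txt, ImageSets/val.txt) are assumptions, not details taken from the paper or the DALDet repository.

```python
# Hypothetical sketch: load the standard KITTI 3712/3769 train/val split.
# File names and paths are assumptions, not the authors' code.
from pathlib import Path


def load_split(split_file: Path) -> list[str]:
    """Read one sample index per line (e.g. '000000') and return the list."""
    with open(split_file) as f:
        return [line.strip() for line in f if line.strip()]


if __name__ == "__main__":
    image_sets = Path("kitti/ImageSets")  # assumed directory layout
    train_ids = load_split(image_sets / "train.txt")
    val_ids = load_split(image_sets / "val.txt")
    # The paper reports 3712 training and 3769 validation samples.
    print(f"train: {len(train_ids)}  val: {len(val_ids)}")
```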
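
Likewise, a hedged sketch of the reported experiment setup: the training hyperparameters quoted above, and a test-time post-processing step that keeps at most 100 predictions per image after NMS with an IoU threshold of 0.3. The use of torchvision's generic NMS is an illustrative stand-in, not the authors' implementation.

```python
# Hedged sketch of the reported hyperparameters and post-processing,
# assuming a plain NMS step; not taken from the DALDet codebase.
import torch
from torchvision.ops import nms

TRAIN_CFG = {          # values reported in the paper
    "initial_lr": 0.01,
    "batch_size": 32,
    "epochs": 300,
}


def postprocess(boxes: torch.Tensor, scores: torch.Tensor,
                iou_thresh: float = 0.3, max_dets: int = 100):
    """Apply NMS, then keep at most `max_dets` highest-scoring boxes."""
    keep = nms(boxes, scores, iou_thresh)  # indices sorted by descending score
    keep = keep[:max_dets]
    return boxes[keep], scores[keep]


if __name__ == "__main__":
    # Toy example: two heavily overlapping boxes and one separate box.
    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                          [1.0, 1.0, 11.0, 11.0],
                          [50.0, 50.0, 60.0, 60.0]])
    scores = torch.tensor([0.9, 0.8, 0.7])
    kept_boxes, kept_scores = postprocess(boxes, scores)
    print(kept_boxes, kept_scores)
```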