Transformation-Equivariant 3D Object Detection for Autonomous Driving

Authors: Hai Wu, Chenglu Wen, Wei Li, Xin Li, Ruigang Yang, Cheng Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted comprehensive experiments on both KITTI (Geiger, Lenz, and Urtasun 2012) and Waymo dataset (Sun et al. 2020).
Researcher Affiliation | Collaboration | 1 School of Informatics, Xiamen University; 2 Inceptio Technology; 3 School of Performance, Visualization, and Fine Art, Texas A&M University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/hailanyi/TED.
Open Datasets | Yes | We conducted comprehensive experiments on both KITTI (Geiger, Lenz, and Urtasun 2012) and Waymo dataset (Sun et al. 2020).
Dataset Splits | Yes | For the KITTI dataset, we follow recent work (Deng et al. 2021b; Wu et al. 2022b) to divide the training data into a train split of 3712 frames and a val split of 3769 frames. (A minimal split-loading sketch follows the table.)
Hardware Specification | Yes | We train all the detectors on two 3090 GPU cards with a batch size of four and an Adam optimizer with a learning rate of 0.01.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We train all the detectors on two 3090 GPU cards with a batch size of four and an Adam optimizer with a learning rate of 0.01. The method achieves high detection performance even without rotation and reflection data augmentation, and slightly better results with it; scaling, local augmentation, and ground-truth sampling are also used. (See the training-setup sketch after the table.)
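
The dataset-splits row quotes a 3712-frame train split and a 3769-frame val split of the KITTI training data. Below is a minimal sketch of loading such a split, assuming the common ImageSets/train.txt and ImageSets/val.txt layout used by OpenPCDet-style codebases; the directory layout and file names are assumptions, not something the paper or this assessment confirms.

```python
# Minimal sketch: read KITTI frame ids for the train/val split (3712 / 3769).
# The ImageSets/train.txt and ImageSets/val.txt layout is an assumption taken
# from common OpenPCDet-style repositories, not confirmed by the paper.
from pathlib import Path


def load_split(imagesets_dir: str, split: str) -> list[str]:
    """Return the frame ids (e.g. '000000') listed for one split."""
    split_file = Path(imagesets_dir) / f"{split}.txt"
    return [line.strip() for line in split_file.read_text().splitlines() if line.strip()]


if __name__ == "__main__":
    train_ids = load_split("kitti/ImageSets", "train")  # expected: 3712 ids
    val_ids = load_split("kitti/ImageSets", "val")      # expected: 3769 ids
    print(f"train: {len(train_ids)}, val: {len(val_ids)}")
```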
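
The experiment-setup row reports training with an Adam optimizer, a learning rate of 0.01, and a batch size of four on two 3090 GPUs. The sketch below shows only that optimizer and dataloader configuration in plain PyTorch; the tiny stand-in model, random tensors, and MSE loss are placeholders and are not part of the released TED code.

```python
# Minimal, self-contained sketch of the reported optimizer settings:
# Adam with lr = 0.01 and a batch size of four. The model, data, and loss
# below are placeholders; the real detector and dataloader live in the
# TED repository (https://github.com/hailanyi/TED).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in for the detector
data = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))          # random placeholder samples
loader = DataLoader(data, batch_size=4, shuffle=True)                  # batch size of four

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)              # Adam, learning rate 0.01

for inputs, targets in loader:                                         # one pass over the toy data
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)              # placeholder loss
    loss.backward()
    optimizer.step()
```

Distributing training across the two 3090 cards would typically sit on top of this loop (for example via torch.nn.parallel.DistributedDataParallel), which is omitted here for brevity.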