DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving

Authors: Tianqi Wang, Sukmin Kim, Ji Wenxuan, Enze Xie, Chongjian Ge, Junsong Chen, Zhenguo Li, Ping Luo

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type: Experimental. In this work, we propose DeepAccident, a large-scale dataset generated via a realistic simulator containing diverse accident scenarios that frequently occur in real-world driving. ... Finally, we present a baseline V2X model named V2XFormer that demonstrates superior performance for motion and accident prediction and 3D object detection compared to the single-vehicle model.
Researcher Affiliation: Collaboration. 1. The University of Hong Kong; 2. Huawei Noah's Ark Lab; 3. Dalian University of Technology.
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: No. The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets: Yes. In this work, we propose DeepAccident, a large-scale dataset generated via a realistic simulator containing diverse accident scenarios that frequently occur in real-world driving. ... Our main contributions can be summarized in three-fold: (i) DeepAccident, the first V2X dataset and benchmark that contains diverse collision accidents... The paper also cites KITTI (Geiger et al. 2013), nuScenes (Caesar et al. 2020), and Waymo (Sun et al. 2020).
Dataset Splits: Yes. Besides, we split the data with a ratio of 0.7, 0.15, and 0.15 for training, validation, and testing splits, resulting in 203k, 41k, and 41k samples, respectively.
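The reported 0.7/0.15/0.15 split could be reproduced with a sketch like the one below. This is an illustration only: the paper does not describe the splitting procedure (e.g. whether it is a uniform random split or a scenario-level split), and `split_dataset` is a hypothetical helper, not code from the authors.

```python
import random

def split_dataset(samples, ratios=(0.7, 0.15, 0.15), seed=0):
    """Randomly partition samples into train/val/test by the given ratios."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder goes to the test split
    return train, val, test

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 700 150 150
```

With roughly 285k total samples, a 0.7/0.15/0.15 split yields sizes on the order of the 203k/41k/41k reported in the paper.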
Hardware Specification: No. The paper does not provide specific details about the hardware (e.g., GPU or CPU models) used to run the experiments.
Software Dependencies: No. The paper mentions the use of BEVerse and the Swin Transformer but does not provide version numbers for software dependencies or the programming environment.
Experiment Setup: Yes. For 3D object detection, the BEV ranges are [-51.2m, 51.2m] for both the X-axis and Y-axis with a 0.8m interval, while for motion prediction, the ranges are [-50.0m, 50.0m] with a 0.5m interval. The models use 1 second of past observations to predict 2 seconds into the future, corresponding to a temporal context of 3 past frames including the current frame and 4 future frames at 2Hz. We choose BEVerse-tiny as the single-vehicle model. For training, we train the models on the training split of DeepAccident for 20 epochs.
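The quoted ranges and intervals imply concrete BEV grid resolutions and frame counts; the sketch below works them out. It is an arithmetic check derived from the quoted numbers, not code from the paper, and `bev_grid_shape` is a hypothetical helper.

```python
def bev_grid_shape(x_range, y_range, interval):
    """Number of BEV grid cells along X and Y for a metric range and cell size."""
    nx = round((x_range[1] - x_range[0]) / interval)
    ny = round((y_range[1] - y_range[0]) / interval)
    return nx, ny

# 3D detection grid: [-51.2 m, 51.2 m] at 0.8 m per cell -> 128 x 128
print(bev_grid_shape((-51.2, 51.2), (-51.2, 51.2), 0.8))

# Motion prediction grid: [-50.0 m, 50.0 m] at 0.5 m per cell -> 200 x 200
print(bev_grid_shape((-50.0, 50.0), (-50.0, 50.0), 0.5))

# Temporal context at 2 Hz: 1 s of past (2 frames) plus the current frame
# gives 3 input frames; 2 s of future gives 4 predicted frames.
frame_rate_hz = 2
past_frames = int(1.0 * frame_rate_hz) + 1  # includes the current frame
future_frames = int(2.0 * frame_rate_hz)
print(past_frames, future_frames)  # 3 4
```

These figures match the paper's stated setup: a 128x128 detection grid, a 200x200 motion-prediction grid, and a 3-past / 4-future frame window at 2 Hz.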