DAMO-StreamNet: Optimizing Streaming Perception in Autonomous Driving

Authors: Jun-Yan He, Zhi-Qi Cheng, Chenyang Li, Wangmeng Xiang, Binghui Chen, Bin Luo, Yifeng Geng, Xuansong Xie

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that DAMO-StreamNet surpasses existing SOTA methods, achieving 37.8% (normal size (600, 960)) and 43.3% (large size (1200, 1920)) sAP without utilizing any extra data.
Researcher Affiliation | Collaboration | Jun-Yan He¹, Zhi-Qi Cheng², Chenyang Li¹, Wangmeng Xiang¹, Binghui Chen¹, Bin Luo¹, Yifeng Geng¹, Xuansong Xie¹ (¹DAMO Academy, Alibaba Group; ²Carnegie Mellon University)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/zhiqic/DAMO-StreamNet.
Open Datasets | Yes | We utilized the Argoverse-HD dataset, which comprises various urban outdoor scenes from two US cities. We pretrained the base detector of our DAMO-StreamNet on the COCO dataset [Lin et al., 2014], following the methodology of StreamYOLO [Yang et al., 2022a].
Dataset Splits | Yes | We adhered to the train/validation split proposed by Li et al. [Li et al., 2020], with the validation set consisting of 15k frames (see the split-verification sketch below the table).
Hardware Specification | Yes | We then trained DAMO-StreamNet on the Argoverse-HD dataset for 8 epochs with a batch size of 32, using 4 V100 GPUs. Table 6: Ablation study of inference time (ms) on V100.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies like Python, PyTorch, or other libraries.
Experiment Setup | Yes | We then trained DAMO-StreamNet on the Argoverse-HD dataset for 8 epochs with a batch size of 32, using 4 V100 GPUs. The normal input resolution (600, 960) was utilized unless specified otherwise. AK-Distillation is an auxiliary loss for DAMO-StreamNet training, with the loss weight set to 0.2/0.2/0.1 for DAMO-StreamNet S/M/L, respectively (see the training-setup sketch below the table).
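
To make the Dataset Splits row concrete, here is a minimal sketch of how the reported split sizes could be checked, assuming the Argoverse-HD annotations are shipped in the standard COCO JSON format. The annotation file paths are assumptions for illustration, not taken from the paper or its repository.

```python
# Minimal sketch: verify the Argoverse-HD train/validation split sizes.
# Assumes COCO-format annotation files; the paths below are assumptions,
# not taken from the paper or its repository.
from pycocotools.coco import COCO

TRAIN_ANN = "Argoverse-HD/annotations/train.json"  # assumed path
VAL_ANN = "Argoverse-HD/annotations/val.json"      # assumed path

train = COCO(TRAIN_ANN)
val = COCO(VAL_ANN)

print(f"train frames: {len(train.getImgIds())}")
print(f"val frames:   {len(val.getImgIds())}")  # paper reports ~15k validation frames
```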
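The Experiment Setup row describes the training recipe in prose; the sketch below restates it as a configuration plus loss combination. All identifiers (TrainConfig, DISTILL_WEIGHTS, total_loss) are hypothetical illustrations rather than the authors' actual API; only the numeric values come from the quoted setup, and the additive combination of detection and AK-Distillation losses is an assumption.

```python
# Hedged sketch of the quoted training setup. All identifiers here are
# hypothetical; only the numeric values come from the paper's quoted setup.
from dataclasses import dataclass


@dataclass
class TrainConfig:
    epochs: int = 8                 # "for 8 epochs"
    batch_size: int = 32            # "with a batch size of 32"
    num_gpus: int = 4               # "using 4 V100 GPUs"
    input_size: tuple = (600, 960)  # normal resolution; large is (1200, 1920)


# Per-variant AK-Distillation auxiliary-loss weights quoted in the paper
# (0.2 / 0.2 / 0.1 for the S / M / L model variants).
DISTILL_WEIGHTS = {"S": 0.2, "M": 0.2, "L": 0.1}


def total_loss(det_loss: float, ak_distillation_loss: float, variant: str = "S") -> float:
    """Combine the detection loss with the AK-Distillation auxiliary loss.

    The paper states only that AK-Distillation is an auxiliary loss with the
    given weight; this simple additive combination is an assumption.
    """
    return det_loss + DISTILL_WEIGHTS[variant] * ak_distillation_loss
```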