RESA: Recurrent Feature-Shift Aggregator for Lane Detection

Authors: Tu Zheng, Hao Fang, Yi Zhang, Wenjian Tang, Zheng Yang, Haifeng Liu, Deng Cai (pp. 3547-3554)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our method achieves state-of-the-art results on two popular lane detection benchmarks (CULane and Tusimple). Code has been made available at: https://github.com/ZJULearning/resa."
Researcher Affiliation | Collaboration | "1 State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, China; 2 Fabu Inc., Hangzhou, China"
Pseudocode | No | The paper provides mathematical formulas and illustrative diagrams but does not include a structured pseudocode or algorithm block.
Open Source Code | Yes | "Code has been made available at: https://github.com/ZJULearning/resa."
Open Datasets | Yes | "We conduct experiments on two widely used lane detection benchmark datasets: CULane dataset (Pan et al. 2018) and the TuSimple lane detection benchmark (https://github.com/TuSimple/tusimple-benchmark/)."
Dataset Splits | Yes | "The details of the datasets are shown in Table 1."

Table 1:
Dataset  | #Frame  | Train  | Validation | Test   | Resolution | Scenario Type            | #Lane
TuSimple | 6,408   | 3,236  | 358        | 2,782  | 1280x720   | highway                  | 5
CULane   | 133,235 | 88,880 | 9,675      | 34,680 | 1640x590   | urban, rural and highway | 4
Hardware Specification | Yes | "All models are trained with 4 NVIDIA 2080Ti GPUs (11 GB memory each) on Ubuntu."
Software Dependencies | Yes | "All experiments are implemented with PyTorch 1.1."
Experiment Setup | Yes | "We use SGD (Bottou 2010) with momentum 0.9 and weight decay 1e-4 as the optimizer to train our model, and the learning rate is set to 2.5e-2 for CULane and 2.0e-2 for TuSimple, respectively. We use a warmup (Dollár, Girshick, and Noordhuis 2017) strategy in the first 500 batches and then apply a polynomial learning rate decay policy (Mishra and Sarawadekar 2019) with power set to 0.9. The loss function is the same as SCNN (Pan et al. 2018), which consists of a segmentation BCE loss and an existence classification CE loss. Considering the imbalanced labels between background and lane markings, the segmentation loss of the background class is multiplied by 0.4. The batch size is set to 8 for CULane and 4 for TuSimple, respectively. The total number of training epochs is set to 50 for the TuSimple dataset and 12 for the CULane dataset."
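The quoted setup fully determines a per-iteration learning-rate schedule (linear warmup over 500 batches, then polynomial decay with power 0.9). The sketch below is a minimal, dependency-free reconstruction, not the authors' code: the total step count (88,880 CULane training images / batch size 8 x 12 epochs) and the assumption that decay restarts from the base rate once warmup ends are both inferences, since the quote gives only the ingredients.

```python
def lr_at_step(step,
               base_lr=2.5e-2,       # CULane base learning rate from the quote
               warmup_steps=500,     # warmup over the first 500 batches
               total_steps=133_320,  # assumption: 88,880 images / batch 8 * 12 epochs
               power=0.9):           # polynomial-decay power from the quote
    """Learning rate at a given training iteration.

    Linear warmup for the first `warmup_steps` batches, then "poly"
    decay: base_lr * (1 - progress) ** power over the remaining steps.
    The warmup-to-decay handoff is an assumption (decay is taken to
    start from base_lr immediately after warmup).
    """
    if step < warmup_steps:
        # Linear ramp from base_lr / warmup_steps up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Fraction of the post-warmup schedule already completed.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * (1.0 - progress) ** power
```

In a PyTorch 1.1 setup this function could be wired to `torch.optim.SGD(params, lr=2.5e-2, momentum=0.9, weight_decay=1e-4)` via `torch.optim.lr_scheduler.LambdaLR`; the 0.4 background weight on the segmentation BCE loss would similarly be supplied through the loss function's per-class `weight` argument. Both wirings are illustrative, not taken from the released code.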