GSENet: Global Semantic Enhancement Network for Lane Detection

Authors: Junhao Su, Zhenghan Chen, Chenghao He, Dongzhi Guan, Changpeng Cai, Tongxi Zhou, Jiashen Wei, Wenhua Tian, Zhihuai Xie

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results reveal that the proposed method exhibits remarkable superiority over the current state-of-the-art techniques for lane detection. Our codes are available at: https://github.com/crystal250/GSENet.
Researcher Affiliation | Academia | Junhao Su^1, Zhenghan Chen^4, Chenghao He^2, Dongzhi Guan^1, Changpeng Cai^1, Tongxi Zhou^5, Jiasheng Wei^6, Wenhua Tian^1, Zhihuai Xie^3; 1 Southeast University, 2 East China University of Science and Technology, 3 Tsinghua University, 4 Peking University, 5 Institute of Automation, Chinese Academy of Sciences, 6 Fudan University
Pseudocode | No | The paper describes modules and formulas but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our codes are available at: https://github.com/crystal250/GSENet.
Open Datasets | Yes | Our experiments were conducted on two widely recognized and extensively used lane detection datasets in the industry: CULane (Pan et al. 2018) and TuSimple (TuSimple 2020). CULane is a large-scale lane detection dataset consisting of 88.9k training images, 9.7k validation images, and 34.7k test images. TuSimple is another large-scale lane detection dataset developed by the self-driving company TuSimple. The dataset consists of a 3.3k training set, a 0.4k validation set, and a 2.8k test set...
Dataset Splits | Yes | CULane is a large-scale lane detection dataset consisting of 88.9k training images, 9.7k validation images, and 34.7k test images. TuSimple is another large-scale lane detection dataset... The dataset consists of a 3.3k training set, a 0.4k validation set, and a 2.8k test set...
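For quick reference, a minimal sketch of the split sizes quoted above, written as a plain Python dictionary. The numbers are the rounded figures reported in the paper, not values verified against the released datasets; the constant name is purely illustrative.

```python
# Approximate train/val/test split sizes quoted in the paper (rounded image counts).
# Illustrative constants only, not read from the datasets themselves.
DATASET_SPLITS = {
    "CULane":   {"train": 88_900, "val": 9_700, "test": 34_700},
    "TuSimple": {"train": 3_300,  "val": 400,   "test": 2_800},
}
```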
Hardware Specification | Yes | In addition, our network is implemented on the PyTorch framework and trained on a single GeForce RTX 4090 GPU.
Software Dependencies | No | The paper mentions the 'pytorch framework' but does not specify a version number or list other software dependencies with version numbers.
Experiment Setup | Yes | In terms of data processing, for all datasets we crop the input to 800 × 320. The same data augmentation is applied: random affine transformations (translation, rotation, and scaling) and random horizontal flips. In terms of optimization, the AdamW optimizer (Kingma and Ba 2014) and a cosine-decay learning-rate strategy are adopted, similar to (Zheng et al. 2022). For the CULane and TuSimple datasets we set epochs=15, lr=6e-4, batch size=24 and epochs=70, lr=1.0e-3, batch size=40, respectively, with h=8 and P=10. The angle loss weight in all datasets is set to 15, and the balance between GLIoU Loss and Angle Loss is controlled by a hyperparameter α that adjusts their proportions. The combined loss incorporating GLIoU Loss and Angle Loss is formulated as follows: L_comb = α · L_GLIoU + (1 − α) · L_angle. Based on our experiments, we set α to 0.98.
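As a minimal sketch of how the quoted loss and optimization setup could be wired together in PyTorch: `gliou_loss` and `angle_loss` are hypothetical stand-ins for the paper's GLIoU Loss and Angle Loss terms, and `model` is a placeholder for the GSENet network; the actual implementations live in the linked repository.

```python
import torch

ALPHA = 0.98  # balance between GLIoU Loss and Angle Loss, as reported above

def combined_loss(gliou_loss: torch.Tensor, angle_loss: torch.Tensor) -> torch.Tensor:
    """L_comb = ALPHA * L_GLIoU + (1 - ALPHA) * L_angle."""
    return ALPHA * gliou_loss + (1.0 - ALPHA) * angle_loss

# AdamW with a cosine-decay learning-rate schedule, using the CULane
# hyperparameters quoted above (15 epochs, lr=6e-4, batch size 24).
model = torch.nn.Linear(10, 4)  # placeholder module standing in for GSENet
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=15)
```

The paper does not state whether the cosine schedule is stepped per iteration or per epoch; the sketch assumes one step per epoch, so T_max matches the CULane epoch count.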