Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

RETRACTED: GEONet: Global Enhancement and Optimization Network for Lane Detection

Authors: Suyang Xi, Yunhao Liu, Hong Ding, Mingshuo Wang, Zhenghan Chen, Xiaoxuan Liang

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results demonstrate that our proposed method significantly outperforms current state-of-the-art lane detection techniques. Our approach has been thoroughly validated on multiple benchmark datasets, demonstrating its state-of-the-art performance and robustness." |
| Researcher Affiliation | Collaboration | 1. School of Electrical Engineering and Artificial Intelligence, Xiamen University; 2. School of Information Science and Engineering, Fudan University; 3. Microsoft; 4. College of Software, Xinjiang University; 5. School of Electrical and Computer Engineering, University of Massachusetts Amherst |
| Pseudocode | No | The paper describes the methodology using textual descriptions and architectural diagrams (Figure 2), but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper contains no explicit statement about releasing the source code for the methodology and provides no link to a code repository. The GitHub link it mentions (https://github.com/TuSimple/tusimple-benchmark/) is for a dataset, not the authors' implementation. |
| Open Datasets | Yes | "Our experimental investigations leverage two prominent and extensively benchmarked datasets in the realm of lane detection: CULane (Pan et al. 2018) and TuSimple (https://github.com/TuSimple/tusimple-benchmark/)." |
| Dataset Splits | No | The paper mentions using the CULane and TuSimple datasets and discusses training procedures, but it does not explicitly state the training/validation/test splits (e.g., percentages or sample counts) used for these datasets. |
| Hardware Specification | Yes | "Training was performed on a GeForce RTX 4090 GPU." |
| Software Dependencies | No | The paper mentions the AdamW optimizer and various network architectures (ResNet, DLA, FPN), but provides no version numbers for the software, programming languages, or libraries used in the implementation. |
| Experiment Setup | Yes | "Input images are resized to 800×320 for all datasets. The AdamW optimizer with a cosine decay learning rate strategy is employed for optimization. For the CULane and TuSimple datasets, we first perform WCL pre-training for 15 epochs with a learning rate of 4e-4 and a batch size of 40, followed by formal training for 15 epochs with a learning rate of 6e-4 and a batch size of 24, and then 70 epochs with a learning rate of 1e-3 and a batch size of 40. The weight for the angle loss is set to 15 across all datasets, and the interplay between the GRIoU Loss and Angle Loss is finely tuned through a hyperparameter α." |
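The three-phase training schedule quoted in the Experiment Setup row can be sketched as a plain-Python configuration. This is an illustrative reconstruction only, not the authors' code: the phase names, helper functions, and the exact cosine-decay formula (decaying each phase's base learning rate toward zero over that phase's epochs) are assumptions; only the epoch counts, learning rates, and batch sizes come from the paper's reported setup.

```python
import math

# Reported schedule for CULane / TuSimple (phase names are illustrative).
PHASES = [
    {"name": "wcl_pretrain",   "epochs": 15, "base_lr": 4e-4, "batch_size": 40},
    {"name": "formal_stage_1", "epochs": 15, "base_lr": 6e-4, "batch_size": 24},
    {"name": "formal_stage_2", "epochs": 70, "base_lr": 1e-3, "batch_size": 40},
]

def cosine_decay_lr(base_lr, epoch, total_epochs):
    """Assumed cosine decay: base_lr at epoch 0, approaching 0 at the end."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

def schedule():
    """Yield (phase_name, epoch, lr) for every epoch across all phases."""
    for phase in PHASES:
        for epoch in range(phase["epochs"]):
            yield phase["name"], epoch, cosine_decay_lr(
                phase["base_lr"], epoch, phase["epochs"]
            )

lrs = list(schedule())  # 15 + 15 + 70 = 100 epochs in total
```

In a real PyTorch setup the same effect would typically come from `torch.optim.AdamW` paired with `torch.optim.lr_scheduler.CosineAnnealingLR`, restarted at each phase boundary with the new base learning rate.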