Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Structure Guided Lane Detection

Authors: Jinming Su, Chao Chen, Ke Zhang, Junfeng Luo, Xiaoming Wei, Xiaolin Wei

IJCAI 2021 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on public benchmark datasets show that the proposed approach outperforms state-of-the-art methods with 117 FPS on a single GPU.
Researcher Affiliation Industry Jinming Su, Chao Chen, Ke Zhang, Junfeng Luo, Xiaoming Wei and Xiaolin Wei, Meituan
Pseudocode No The paper describes methods and processes in narrative text and mathematical equations, but it does not include a distinct 'Pseudocode' or 'Algorithm' block or figure.
Open Source Code No The paper does not provide any specific links to source code repositories, nor does it contain explicit statements about the public release of its source code.
Open Datasets Yes To evaluate the performance of the proposed method, we conduct experiments on CULane [Pan et al., 2018] and TuSimple [TuSimple, 2017] dataset.
Dataset Splits Yes CULane dataset has a split with 88,880/9,675/34,680 images for train/val/test and TuSimple dataset is divided into three parts: 3,268/358/2,782 for train/val/test.
Hardware Specification No The paper mentions '117 FPS on a single GPU' but does not specify the GPU model or any other hardware components used for experiments.
Software Dependencies No The paper mentions using 'ResNet' and 'Adam optimization algorithm' but does not provide specific version numbers for any software dependencies, such as libraries or programming languages.
Experiment Setup Yes We use the Adam optimization algorithm to train our network end-to-end by optimizing the loss in Eq. (11). In the optimization process, the parameters of the feature extractor are initialized from the pre-trained ResNet18/34 model and the poly learning rate policy is employed for all experiments. The training images are resized to a resolution of 360×640 for faster training, with affine and flipping augmentations applied. We train the model for 10 epochs on CULane and 60 epochs on TuSimple. Moreover, we empirically and experimentally set the number of points P = 72, the anchor width W_anchor = 40, the anchor stride S_anchor = 5 and the anchor angle interval A_anchor = 5.
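The quoted setup can be collected into a training-configuration sketch. This is a minimal illustration, not code released with the paper: the poly-policy power (0.9) and the base learning rate (1e-3) are assumptions, since the paper reports neither.

```python
# Sketch of the reported training configuration for Structure Guided Lane
# Detection. Values marked ASSUMED are not stated in the paper.

CONFIG = {
    "optimizer": "Adam",
    "base_lr": 1e-3,                      # ASSUMED: not reported
    "backbone": "ResNet-18/34 (pre-trained)",
    "input_size": (360, 640),             # images resized to H x W
    "augmentations": ["affine", "flip"],
    "epochs": {"CULane": 10, "TuSimple": 60},
    "num_points_P": 72,
    "anchor_width_W": 40,
    "anchor_stride_S": 5,
    "anchor_angle_interval_A": 5,
}

def poly_lr(base_lr: float, step: int, max_steps: int,
            power: float = 0.9) -> float:
    """Poly learning-rate policy: lr = base_lr * (1 - step/max_steps)**power.

    power=0.9 is the common default for this policy, ASSUMED here.
    """
    return base_lr * (1.0 - step / max_steps) ** power
```

The learning rate decays smoothly from `base_lr` at step 0 to zero at the final step, which matches the usual behavior of the poly policy named in the quote.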