Flexible 3D Lane Detection by Hierarchical Shape Matching
Authors: Zhihao Guan, Ruixin Liu, Zejian Yuan, Ao Liu, Kun Tang, Tong Zhou, Erlong Li, Chao Zheng, Shuqi Mei
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two datasets show that we outperform current top methods under high-precision standards, and full ablation studies also verify each part of our method. |
| Researcher Affiliation | Collaboration | Zhihao Guan (1), Ruixin Liu (1), Zejian Yuan (1), Ao Liu (2), Kun Tang (2), Tong Zhou (2), Erlong Li (2), Chao Zheng (2), Shuqi Mei (2). (1) Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China; (2) T Lab, Tencent Map, Tencent, China |
| Pseudocode | No | The paper describes its methodology using text and figures, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes will be released at https://github.com/Doo-do/FHLD. |
| Open Datasets | Yes | Dataset: Experiments are carried out on two point cloud datasets: a self-collected one named Road BEV, and sub KCUD, a subset of the public KAIST Complex Urban Dataset (Jeong et al. 2019) annotated by ourselves, since there are no readily available large-scale public datasets with 3D lane annotations for the HD map construction task. |
| Dataset Splits | No | For each dataset, we randomly assign 80% of the sequences to training and 20% to testing. The paper does not explicitly mention a separate validation split or how one was used. A sequence-level split sketch appears below the table. |
| Hardware Specification | Yes | FPS is calculated with batch size 1 on an Nvidia RTX 3090, implemented in PyTorch. A minimal FPS-measurement sketch appears below the table. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number or other software dependencies with their specific versions, which are necessary for full reproducibility. |
| Experiment Setup | Yes | We augment each input BEV map by random shifting, rotating, flipping, scaling, and cropping, transforming it to a size of 640×640. We train each model for 200k iterations with batch size 64 and an initial learning rate of 0.0001 using the Adam optimizer. The loss coefficients λ1, λ2, and λ3 are set to 1, 1, and 0.5, the anchor sampling ratio is 1:100, and N=15, B=40. If multiple segments appear in one cell, the one closest to the cell center is chosen as the ground truth for that cell. For better performance, LLSM is not optimized for the first 2k iterations. A hedged training-loop sketch reflecting these settings follows the table. |
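The paper describes only an 80%/20% random split over sequences, with no validation set. A minimal sketch of such a sequence-level split is below; the function name, seed, and sequence identifiers are illustrative, not from the paper.

```python
import random

def split_sequences(sequence_ids, train_ratio=0.8, seed=0):
    """Randomly assign whole sequences to train or test, mirroring the
    paper's 80%/20% sequence-level split (no validation split is described)."""
    ids = list(sequence_ids)
    random.Random(seed).shuffle(ids)  # fixed seed only so the sketch is repeatable
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# Example with hypothetical sequence names.
train_seqs, test_seqs = split_sequences([f"seq_{i:03d}" for i in range(50)])
```

Splitting at the sequence level (rather than per frame) avoids near-duplicate frames from the same drive leaking between train and test.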
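The paper reports FPS with batch size 1 on an RTX 3090 in PyTorch but does not give the measurement protocol. Below is one common way to time GPU inference under those conditions; the input shape (channel count in particular), warm-up count, and iteration count are assumptions.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_shape=(1, 3, 640, 640), warmup=50, iters=200):
    """Time forward passes with batch size 1, as the paper reports FPS
    (measured on an RTX 3090 in PyTorch)."""
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    for _ in range(warmup):
        model(x)                      # warm up kernels and cuDNN autotuning
    torch.cuda.synchronize()          # drain queued GPU work before timing
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()          # wait for the final forward pass to finish
    return iters / (time.perf_counter() - start)
```

The explicit `torch.cuda.synchronize()` calls matter: CUDA launches are asynchronous, so timing without them would measure launch overhead rather than actual compute.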
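To make the quoted hyperparameters concrete, here is a minimal training-loop sketch using the stated settings (Adam, lr 0.0001, batch size 64, 200k iterations, loss weights 1/1/0.5, LLSM term disabled for the first 2k iterations). The model, the BEV channel count, and the individual loss terms are stand-ins, since the official FHLD code is not yet released.

```python
import torch
from torch import nn
from torch.optim import Adam

# Settings quoted from the paper; the network and loss heads are placeholders.
LR, BATCH_SIZE = 1e-4, 64
TOTAL_ITERS, LLSM_START = 200_000, 2_000       # LLSM not optimized for first 2k iters
L1_W, L2_W, L3_W = 1.0, 1.0, 0.5               # loss coefficients λ1, λ2, λ3

model = nn.Conv2d(1, 16, 3, padding=1)         # stand-in for the FHLD network
optimizer = Adam(model.parameters(), lr=LR)

for it in range(TOTAL_ITERS):
    # Stand-in for augmented BEV maps (shift/rotate/flip/scale/crop to 640×640).
    bev = torch.randn(BATCH_SIZE, 1, 640, 640)
    out = model(bev)
    # Placeholder loss terms; the real ones come from the paper's heads.
    loss_a, loss_b, loss_llsm = out.abs().mean(), out.pow(2).mean(), out.mean().abs()
    loss = L1_W * loss_a + L2_W * loss_b
    if it >= LLSM_START:                       # enable the LLSM term only after 2k iters
        loss = loss + L3_W * loss_llsm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```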