PVALane: Prior-Guided 3D Lane Detection with View-Agnostic Feature Alignment
Authors: Zewen Zheng, Xuemin Zhang, Yongqiang Mou, Xiang Gao, Chengxin Li, Guoheng Huang, Chi-Man Pun, Xiaochen Yuan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on the OpenLane and ONCE-3DLanes datasets demonstrate the superior performance of our method compared to existing state-of-the-art approaches and exhibit excellent robustness. |
| Researcher Affiliation | Collaboration | (1) X Lab, GAC R&D Center, Guangdong, China; (2) Guangdong University of Technology, Guangdong, China; (3) South China Normal University, Guangdong, China; (4) University of Macau, Macau, China; (5) Macao Polytechnic University, Macao, China |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | The experiments are conducted on two popular benchmark datasets of 3D lane detection: OpenLane (Chen et al. 2022) and ONCE-3DLanes (Yan et al. 2022). |
| Dataset Splits | Yes | OpenLane: We present results on the OpenLane validation set in Table 1, from which it can be seen that PVALane achieves state-of-the-art results on F1 score and category accuracy. |
| Hardware Specification | Yes | Four A100s are used to train the model and the batch size is set to 32. |
| Software Dependencies | No | The paper mentions using 'ResNet-50' as the CNN backbone and the 'Adam optimization algorithm', but does not provide specific version numbers for any software components. |
| Experiment Setup | Yes | Anchor filtering threshold τ in Eq. (5) is set to 0.2 and the maximum number of prior anchors is set to 1000. Four A100s are used to train the model and the batch size is set to 32. PVALane is trained in an end-to-end manner using the Adam optimization algorithm (Kingma and Ba 2017) with a learning rate of 2e-4. During training, λpri and λseg in Eq. (14) are set to 1.0 and 0.1, respectively. |
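The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. This is a hypothetical reconstruction: PVALane's code is not public, so the config keys and the additive form of the combined loss (detection term plus λpri- and λseg-weighted terms from Eq. (14)) are assumptions, not the authors' implementation.

```python
# Hypothetical training configuration assembled from the paper's stated
# hyperparameters; key names and the loss form are assumptions.
CONFIG = {
    "optimizer": "Adam",        # Kingma and Ba 2017, per the paper
    "learning_rate": 2e-4,      # stated learning rate
    "batch_size": 32,           # total batch size across four A100 GPUs
    "tau": 0.2,                 # anchor filtering threshold in Eq. (5)
    "max_prior_anchors": 1000,  # maximum number of prior anchors
    "lambda_pri": 1.0,          # weight of the prior loss term, Eq. (14)
    "lambda_seg": 0.1,          # weight of the segmentation loss term, Eq. (14)
}

def total_loss(l_det, l_pri, l_seg, cfg=CONFIG):
    """Assumed overall loss: detection loss plus weighted prior and
    segmentation terms, following the λpri/λseg weighting in Eq. (14)."""
    return l_det + cfg["lambda_pri"] * l_pri + cfg["lambda_seg"] * l_seg
```

A reproduction attempt would plug the paper's actual per-term losses into `total_loss`; the keys above are only a compact summary of what the paper reports.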