VectorMapNet: End-to-end Vectorized HD Map Learning
Authors: Yicheng Liu, Tianyuan Yuan, Yue Wang, Yilun Wang, Hang Zhao
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that VectorMapNet achieves strong map learning performance on both the nuScenes and Argoverse2 datasets, surpassing previous state-of-the-art methods by 14.2 mAP and 14.6 mAP. |
| Researcher Affiliation | Collaboration | 1Shanghai Qi Zhi Institute 2Tsinghua University 3MIT 4Li Auto. Correspondence to: Hang Zhao <zhaohang0124@gmail.com>. |
| Pseudocode | Yes | Algorithm 1 The Algorithm of Discrete Fréchet Distance |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology or links to a code repository. |
| Open Datasets | Yes | We conduct experiments on the nuScenes (Caesar et al., 2020) and Argoverse2 (Wilson et al., 2021) datasets. |
| Dataset Splits | Yes | Argoverse2 We further conduct experiments on the Argoverse2 (Wilson et al., 2021) dataset. Like nuScenes, it contains 1000 logs (700, 150, 150 for training, validation and test set). |
| Hardware Specification | Yes | We train all our models on 8 GTX3090 GPUs for 110 epochs with a total batch size of 32. |
| Software Dependencies | No | The paper mentions software components like 'ResNet50', 'PointNet', 'AdamW optimizer', and 'Transformer', but does not provide specific version numbers for these or any other libraries or frameworks used (e.g., PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We train all our models on 8 GTX3090 GPUs for 110 epochs with a total batch size of 32. We use the AdamW (Loshchilov & Hutter, 2018) optimizer with a gradient clipping norm of 5.0. For the learning rate schedule, we use a step schedule that multiplies the learning rate by 0.1 at epoch 100 and has a linear warm-up period over the first 5000 steps. The dropout rate for all modules is 0.2, following the transformer's settings (Vaswani et al., 2017). |
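The learning-rate schedule quoted in the Experiment Setup row (linear warm-up for the first 5000 steps, then a ×0.1 step decay at epoch 100) can be sketched as a small standalone function. This is a minimal illustration, not the authors' code: the base learning rate `base_lr` is a hypothetical value, since the paper excerpt above does not state it.

```python
def lr_at(step: int, epoch: int,
          base_lr: float = 1e-4,       # hypothetical; not stated in the paper excerpt
          warmup_steps: int = 5000,    # linear warm-up over the first 5000 steps
          decay_epoch: int = 100,      # multiply LR by 0.1 at epoch 100
          decay_factor: float = 0.1) -> float:
    """Return the learning rate for a given global step and epoch,
    following the step schedule described in the paper."""
    if step < warmup_steps:
        # Linear warm-up: ramp from ~0 up to base_lr over warmup_steps.
        return base_lr * (step + 1) / warmup_steps
    if epoch >= decay_epoch:
        # Step decay after epoch 100.
        return base_lr * decay_factor
    return base_lr
```

In a PyTorch-style training loop this function would typically be wrapped in a `LambdaLR`-like scheduler attached to the AdamW optimizer; gradient clipping (norm 5.0) would be applied separately at each optimization step.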