MFTraj: Map-Free, Behavior-Driven Trajectory Prediction for Autonomous Driving
Authors: Haicheng Liao, Zhenning Li, Chengyue Wang, Huanming Shen, Dongping Liao, Bonan Wang, Guofa Li, Chengzhong Xu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on the Argoverse, NGSIM, HighD, and MoCAD datasets underscore MFTraj's robustness and adaptability, outperforming numerous benchmarks even in data-challenged scenarios without the need for additional information such as HD maps or vectorized maps. |
| Researcher Affiliation | Academia | Haicheng Liao¹, Zhenning Li¹, Chengyue Wang¹, Huanming Shen², Dongping Liao¹, Bonan Wang¹, Guofa Li³, Chengzhong Xu¹ — ¹University of Macau; ²University of Electronic Science and Technology of China; ³Chongqing University |
| Pseudocode | No | The paper describes its methods using prose, equations, and architectural diagrams, but it does not include pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code publicly available or include a link to a code repository. |
| Open Datasets | Yes | Datasets. We tested the model's efficacy on Argoverse [Chang et al., 2019], NGSIM [Deo and Trivedi, 2018], HighD [Krajewski et al., 2018], and MoCAD [Liao et al., 2024b] datasets. |
| Dataset Splits | No | The paper describes data segmentation for observation and prediction horizons but does not specify explicit training, validation, or test dataset splits (e.g., percentages or counts). |
| Hardware Specification | Yes | We implemented our model using PyTorch and PyTorch Lightning on an NVIDIA DGX-2 with eight V100 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch and PyTorch Lightning' but does not specify version numbers for these software components. |
| Experiment Setup | Yes | Using the smooth L1 loss as our loss function, the model was trained with the Adam optimizer, a batch size of 32, and learning rates of 10⁻³ and 10⁻⁴. (A hedged sketch of this training configuration follows the table.) |
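
The Experiment Setup and Hardware Specification rows above quote the concrete training settings reported in the paper: smooth L1 loss, the Adam optimizer, a batch size of 32, and learning rates of 10⁻³ and 10⁻⁴, implemented in PyTorch. Below is a minimal PyTorch sketch of that configuration only. The `PlaceholderPredictor` model, the observation/prediction horizons, the synthetic data, and the epoch count are illustrative assumptions and do not reproduce the MFTraj architecture or its datasets.

```python
# Hedged sketch of the quoted training setup (smooth L1 loss, Adam, batch
# size 32, learning rate 1e-3). The model and data below are placeholders,
# NOT the MFTraj architecture or the Argoverse/NGSIM/HighD/MoCAD datasets.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_LEN, PRED_LEN, FEAT = 20, 30, 2  # assumed observation/prediction horizons


class PlaceholderPredictor(nn.Module):
    """Maps an observed (x, y) history to a predicted future trajectory."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(OBS_LEN * FEAT, 128),
            nn.ReLU(),
            nn.Linear(128, PRED_LEN * FEAT),
        )

    def forward(self, history):
        return self.net(history).view(-1, PRED_LEN, FEAT)


# Synthetic stand-in data; a real run would load one of the cited datasets.
history = torch.randn(256, OBS_LEN, FEAT)
future = torch.randn(256, PRED_LEN, FEAT)
loader = DataLoader(TensorDataset(history, future), batch_size=32, shuffle=True)

model = PlaceholderPredictor()
criterion = nn.SmoothL1Loss()  # smooth L1 loss, as quoted from the paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # 1e-4 also cited

for epoch in range(2):  # epoch count is not reported in the paper
    for obs, gt in loader:
        optimizer.zero_grad()
        loss = criterion(model(obs), gt)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The paper also mentions PyTorch Lightning; the same optimizer and loss choices would typically sit in a `LightningModule`'s `configure_optimizers` and `training_step`, but a plain training loop is shown here to keep the sketch self-contained.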