DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States
Authors: Bozhou Zhang, Nan Song, Li Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both the Argoverse 2 and nuScenes benchmarks demonstrate that our DeMo achieves state-of-the-art performance in motion forecasting. |
| Researcher Affiliation | Academia | Bozhou Zhang Nan Song Li Zhang School of Data Science, Fudan University |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | Yes | https://github.com/fudan-zvg/DeMo |
| Open Datasets | Yes | We evaluate our method's performance using the Argoverse 2 [67] and nuScenes [3] motion forecasting datasets. |
| Dataset Splits | Yes | Ablation study on the core components of DeMo on the Argoverse 2 single-agent validation set. |
| Hardware Specification | Yes | All experiments are conducted on 8 NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions software components like the AdamW optimizer, nn.LayerNorm, and nn.GELU, but does not provide specific version numbers for these or any underlying libraries (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | Our models are trained for 60 epochs using the AdamW [42] optimizer, with a batch size of 16 per GPU. The training is conducted end-to-end with a learning rate of 0.003 and a weight decay of 0.01. |
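The reported experiment setup can be collected into a single configuration sketch. This is an illustrative summary of the values quoted in the table above (the field names are our own, not from the paper's codebase):

```python
# Hyperparameters reported for DeMo's training setup (field names illustrative).
train_config = {
    "epochs": 60,
    "optimizer": "AdamW",          # AdamW [42]
    "batch_size_per_gpu": 16,
    "num_gpus": 8,                 # 8x NVIDIA GeForce RTX 3090
    "learning_rate": 0.003,
    "weight_decay": 0.01,
}

# Effective global batch size implied by the per-GPU batch size and GPU count.
global_batch_size = train_config["batch_size_per_gpu"] * train_config["num_gpus"]
print(global_batch_size)  # 128
```

Note that the paper states the per-GPU batch size explicitly; the global batch size of 128 is a derived quantity, assuming standard data-parallel training across all 8 GPUs.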