EDA: Evolving and Distinct Anchors for Multimodal Motion Prediction

Authors: Longzhong Lin, Xuewu Lin, Tianwei Lin, Lichao Huang, Rong Xiong, Yue Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have performed experiments on the Waymo Open Motion Dataset (2021).
Researcher Affiliation | Collaboration | Longzhong Lin (1,2), Xuewu Lin (2), Tianwei Lin (2), Lichao Huang (2), Rong Xiong (1), Yue Wang (1)*; 1: Zhejiang University, 2: Horizon Robotics
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Appendix and code are available at https://github.com/Longzhong-Lin/EDA.
Open Datasets | Yes | We assess our method on the large-scale Waymo Open Motion Dataset (WOMD) proposed by Ettinger et al., which extracts interesting behaviors from actual traffic scenes. The WOMD (Ettinger et al. 2021) includes 487k training scenes, 44k validation and 44k testing scenes, where each scene contains up to 8 target agents.
Dataset Splits | Yes | The WOMD (Ettinger et al. 2021) includes 487k training scenes, 44k validation and 44k testing scenes, where each scene contains up to 8 target agents.
Hardware Specification | Yes | We train the model for 30 epochs on 16 GPUs (NVIDIA RTX 3090) with a batch size of 80 scenes.
Software Dependencies | No | The paper mentions building on the MTR framework and using specific loss functions (BCE, CE), but does not provide version numbers for software dependencies such as PyTorch, CUDA, or other libraries.
Experiment Setup | Yes | We train the model for 30 epochs on 16 GPUs (NVIDIA RTX 3090) with a batch size of 80 scenes. The predefined anchors we use are the 64 intention points generated by a k-means clustering algorithm on the training set, as used in MTR. To improve the mAP metrics, MTR (2022a) scales the original top-6 scores for each sample by dividing them by their sum, making the scores comparable across different agents.
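The Experiment Setup row quotes two concrete procedures from MTR: generating 64 intention-point anchors via k-means clustering of trajectory endpoints on the training set, and scaling each sample's top-6 mode scores by their sum so scores are comparable across agents. The sketch below, assuming NumPy/scikit-learn, shows one plausible reading of those two steps; the function names (build_intention_anchors, scale_top_scores) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' implementation) of the two steps
# described in the Experiment Setup row: (1) k-means intention-point
# anchors, as used in MTR, and (2) MTR-style top-6 score scaling.
# All names and the stand-in data below are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans


def build_intention_anchors(endpoints: np.ndarray, num_anchors: int = 64) -> np.ndarray:
    """Cluster ground-truth trajectory endpoints (N, 2) into `num_anchors` intention points."""
    kmeans = KMeans(n_clusters=num_anchors, n_init=10, random_state=0)
    kmeans.fit(endpoints)
    return kmeans.cluster_centers_  # (num_anchors, 2) anchor coordinates


def scale_top_scores(scores: np.ndarray, top_k: int = 6) -> np.ndarray:
    """Divide an agent's top-k mode scores by their sum so they total 1."""
    top_idx = np.argsort(scores)[::-1][:top_k]  # indices of the k highest scores
    top_scores = scores[top_idx]
    return top_scores / top_scores.sum()


# Usage with random stand-in data (not WOMD):
endpoints = np.random.randn(10_000, 2) * 30.0  # fake endpoint cloud in meters
anchors = build_intention_anchors(endpoints)   # (64, 2) intention points
mode_scores = np.random.rand(64)               # fake per-mode confidences
print(scale_top_scores(mode_scores))           # 6 scores summing to 1
```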