Multimodal Interaction-Aware Trajectory Prediction in Crowded Space
Authors: Xiaodan Shi, Xiaowei Shao, Zipei Fan, Renhe Jiang, Haoran Zhang, Zhiling Guo, Guangming Wu, Wei Yuan, Ryosuke Shibasaki
AAAI 2020, pp. 11982-11989 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments over several trajectory prediction benchmarks demonstrate that our method is able to forecast various plausible futures in complex scenarios and achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | (1) Center for Spatial Information Science, the University of Tokyo; (2) Earth Observation Data Integration and Fusion Research Initiative, the University of Tokyo; (3) Information Technology Center, the University of Tokyo |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | In this section, the proposed model is evaluated on two publicly available datasets: UCY (Lerner, Chrysanthou, and Lischinski 2007) and ETH (Pellegrini et al. 2009). |
| Dataset Splits | No | The paper describes training and testing splits via a leave-one-out approach, but does not explicitly describe a separate validation split, with details such as percentages or sample counts, for hyperparameter tuning or early stopping (see the split sketch after the table). |
| Hardware Specification | Yes | The experiments are implemented using Pytorch under Ubuntu 16.04 LTS with a GTX 1080 GPU. |
| Software Dependencies | No | The paper mentions "Pytorch" and "Ubuntu 16.04 LTS" but does not provide version numbers for the key software components (e.g., the PyTorch version is missing), which would be necessary for full reproducibility. |
| Experiment Setup | Yes | The size of hidden states of all LSTMs is set to 128. All the embedding layers are composed of a fully connected layer with size 128 and a ReLU nonlinearity activation function. The batch size is set to 8 and all the methods are trained for 200 epochs. The optimizer RMSprop is used to train the proposed model with learning rate 0.001. We clip the gradients of LSTM with a maximum threshold of 10 to stabilize the training process. The model outputs GMMs with five components (see the configuration sketch after the table). |
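On the Dataset Splits row: the leave-one-out evaluation follows the standard protocol over the five ETH/UCY scenes. A minimal sketch of that protocol appears below; the scene names follow the common ETH/UCY convention, and `train_fn`/`eval_fn` are hypothetical stand-ins, since the paper releases no code.

```python
# Minimal sketch of the leave-one-out protocol over the five ETH/UCY scenes.
# Scene names follow the usual convention; train_fn and eval_fn are
# hypothetical placeholders for the paper's (unreleased) training/eval code.
SCENES = ["eth", "hotel", "univ", "zara1", "zara2"]

def leave_one_out(scenes, train_fn, eval_fn):
    """Train on all scenes but one, then test on the held-out scene."""
    results = {}
    for held_out in scenes:
        train_scenes = [s for s in scenes if s != held_out]
        model = train_fn(train_scenes)                # fit on the four remaining scenes
        results[held_out] = eval_fn(model, held_out)  # e.g., ADE/FDE on the held-out scene
    return results
```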
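On the Experiment Setup row: the listed hyperparameters translate directly into a PyTorch training configuration. The sketch below is a minimal reconstruction, not the authors' implementation: the hidden size, embedding layer (FC 128 + ReLU), batch size, epoch count, RMSprop optimizer with learning rate 0.001, LSTM gradient clipping at 10, and the five GMM components come from the paper, while the encoder structure, the synthetic data, and the placeholder loss are assumptions (the paper trains with a GMM negative log-likelihood, elided here for brevity).

```python
import torch
import torch.nn as nn

HIDDEN = 128            # hidden-state size of all LSTMs (from the paper)
N_COMPONENTS = 5        # GMM components in the output (from the paper)
BATCH, OBS_LEN = 8, 8   # batch size 8 from the paper; OBS_LEN is an assumption

# Embedding layer: one fully connected layer of size 128 with ReLU, as stated.
embed = nn.Sequential(nn.Linear(2, HIDDEN), nn.ReLU())
lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
# Each 2-D Gaussian component: mean (2) + std (2) + correlation (1) + weight (1).
head = nn.Linear(HIDDEN, N_COMPONENTS * 6)

params = [*embed.parameters(), *lstm.parameters(), *head.parameters()]
optimizer = torch.optim.RMSprop(params, lr=1e-3)   # RMSprop, learning rate 0.001

obs = torch.randn(BATCH, OBS_LEN, 2)    # synthetic observed (x, y) offsets
target = torch.randn(BATCH, 2)          # synthetic next-step offsets

for epoch in range(200):                # trained for 200 epochs
    optimizer.zero_grad()
    h, _ = lstm(embed(obs))
    out = head(h[:, -1])                # parameters of a 5-component GMM
    # Placeholder loss: the paper uses a GMM negative log-likelihood; a simple
    # squared error on the first component's mean stands in to keep this short.
    loss = ((out[:, :2] - target) ** 2).mean()
    loss.backward()
    # Clip the LSTM gradients at a maximum threshold of 10, as stated.
    torch.nn.utils.clip_grad_norm_(lstm.parameters(), max_norm=10.0)
    optimizer.step()
```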