GRLSTM: Trajectory Similarity Computation with Graph-Based Residual LSTM

Authors: Silin Zhou, Jing Li, Hao Wang, Shuo Shang, Peng Han

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | GRLSTM is evaluated on two real-world trajectory datasets, and the experimental results demonstrate that it significantly outperforms all state-of-the-art methods.
Researcher Affiliation | Academia | 1. University of Electronic Science and Technology of China; 2. Harbin Institute of Technology, Shenzhen, China; 3. School of Computer Science, Wuhan University, China; 4. Sichuan Artificial Intelligence Research Institute, Yibin, 644000, China
Pseudocode | No | The paper describes the model architecture and its components in detail in the text and figures (Figures 2 and 3), but it does not include explicit pseudocode blocks or sections labeled 'Algorithm'. (A generic architecture sketch is given after the table.)
Open Source Code | Yes | The implementation is available at https://github.com/slzhou-xy/GRLSTM (footnote 3 in the paper).
Open Datasets | Yes | The paper uses taxi trajectories from the T-drive project (https://www.microsoft.com/en-us/research/publication/tdrive-trajectory-data-sample), collected as taxi ID, GPS coordinates, and timestamp from 10,357 taxis over several days.
Dataset Splits | Yes | For both datasets, the data are randomly split into training, validation, and test sets in the ratio [0.2, 0.1, 0.7] (see the split sketch after the table).
Hardware Specification | Yes | The model is implemented in PyTorch and trained on an Nvidia RTX 3090 GPU.
Software Dependencies | No | The paper mentions PyTorch but does not provide version numbers for it or any other software dependencies.
Experiment Setup | Yes | Adam (Kingma and Ba 2015) is used to optimize the model, with learning rates of 5e-4 on the Beijing dataset and 1e-3 on the Newyork dataset, and training batch sizes of 256 and 512, respectively. The Residual-LSTM has 4 layers; the GAT uses 8 attention heads and 1 layer. The k-nearest selection is set to 10 on the Beijing dataset and 30 on the Newyork dataset (see the configuration sketch after the table).
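The reported split can be reproduced in a few lines of NumPy. This is a minimal sketch, assuming the trajectories are held in a Python list; the function name, seed, and implementation details are our own illustration, not code from the GRLSTM repository.

```python
import numpy as np

def split_trajectories(trajectories, ratios=(0.2, 0.1, 0.7), seed=42):
    """Randomly split trajectories into train/val/test sets.

    The (0.2, 0.1, 0.7) ratio is the one reported in the paper;
    the seed and everything else here is illustrative.
    """
    idx = np.random.default_rng(seed).permutation(len(trajectories))
    n_train = int(ratios[0] * len(trajectories))
    n_val = int(ratios[1] * len(trajectories))
    train = [trajectories[i] for i in idx[:n_train]]
    val = [trajectories[i] for i in idx[n_train:n_train + n_val]]
    test = [trajectories[i] for i in idx[n_train + n_val:]]
    return train, val, test
```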
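For reference, the quoted hyperparameters gather naturally into per-dataset configurations. The dictionary layout and the make_optimizer helper below are hypothetical conveniences; only the values themselves (Adam, the learning rates, the batch sizes, 4 Residual-LSTM layers, 8 GAT heads, 1 GAT layer, and k = 10/30) come from the paper.

```python
import torch

# Values quoted from the paper; the dict structure is our own.
CONFIGS = {
    "beijing": {"lr": 5e-4, "batch_size": 256, "knn": 10},
    "newyork": {"lr": 1e-3, "batch_size": 512, "knn": 30},
}
SHARED = {"residual_lstm_layers": 4, "gat_heads": 8, "gat_layers": 1}

def make_optimizer(model: torch.nn.Module, dataset: str) -> torch.optim.Adam:
    """Build the Adam optimizer with the dataset-specific learning rate."""
    return torch.optim.Adam(model.parameters(), lr=CONFIGS[dataset]["lr"])
```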
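Since the paper provides no pseudocode, the following is a generic PyTorch sketch of a residual LSTM stack at the reported depth of 4. The exact placement of the residual connections in GRLSTM, and how the GAT-based point embeddings are fed into the stack, may differ; treat this as an illustration of the pattern, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualLSTMStack(nn.Module):
    """A stack of LSTM layers with a skip connection around each layer."""

    def __init__(self, hidden_size: int, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(hidden_size, hidden_size, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_size) point embeddings
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out  # residual connection around each LSTM layer
        return x
```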