MobTCast: Leveraging Auxiliary Trajectory Forecasting for Human Mobility Prediction

Authors: Hao Xue, Flora Salim, Yongli Ren, Nuria Oliver

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experimental results, MobTCast outperforms other state-of-the-art next POI prediction methods. Our approach illustrates the value of including different types of context in next POI prediction.
Researcher Affiliation | Academia | Hao Xue, School of Computing Technologies, RMIT University, Melbourne, Australia, hao.xue@rmit.edu.au; Flora D. Salim, School of Computing Technologies, RMIT University, Melbourne, Australia, flora.salim@rmit.edu.au; Yongli Ren, School of Computing Technologies, RMIT University, Melbourne, Australia, yongli.ren@rmit.edu.au; Nuria Oliver, ELLIS Unit Alicante Foundation, Alicante, Spain, nuria@alum.mit.edu
Pseudocode | No | The paper provides architectural diagrams and mathematical formulations but does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | In our experiments, we use three widely used LBSNs datasets: Gowalla [6], Foursquare-NYC [39] (FS-NYC), and Foursquare-Tokyo [39] (FS-TKY) (more details of these datasets are contained in Section 7.1 of the Appendix). These datasets are publicly available and no personally identifiable information is included.
Dataset Splits | Yes | Following [38], the observation length n is set to 20 and we split the check-in sequence of each user into 80% for training and 20% for testing. The hyperparameters are set based on the performance on the validation set, which is 10% of the training set.
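The split described above (per-user 80/20 train/test, with 10% of the training portion held out for validation) can be sketched as follows. The function name and exact rounding behaviour are illustrative assumptions; the paper does not specify them.

```python
def split_user_checkins(checkins, train_frac=0.8, val_frac=0.1):
    """Split one user's chronological check-in sequence into
    train/validation/test as described: 80% train, 20% test,
    with the last 10% of the training portion used for validation.
    Illustrative sketch; rounding details are assumptions."""
    n_train = int(len(checkins) * train_frac)
    train_all, test = checkins[:n_train], checkins[n_train:]
    n_val = int(len(train_all) * val_frac)
    train = train_all[:len(train_all) - n_val]
    val = train_all[len(train_all) - n_val:]
    return train, val, test

# Example: a user with 100 check-ins yields 72 train, 8 validation, 20 test.
train, val, test = split_user_checkins(list(range(100)))
```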
Hardware Specification | Yes | The model is trained using an Adam optimiser [18] and implemented using PyTorch on a desktop with an NVIDIA GeForce RTX-2080 Ti GPU.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number. It also mentions 'Adam optimiser', but this is an algorithm, not a software dependency with a version.
Experiment Setup | Yes | The hyperparameters are set based on the performance on the validation set, which is 10% of the training set. The hidden dimensions for the Transformer used in the mobility feature extractor and the auxiliary task are both 128. According to Eq. (2) and (3), the sum of the dimensions of the POI, semantic and temporal embeddings equals the hidden dimension of the Transformer F(·). Thus, we set the dimensions of these embeddings as 80, 24, and 24. As for the weights of the three loss functions in Eq. (15), all three θs are set to 1. For the auxiliary task, we process the dataset by normalising the coordinates (latitude and longitude) within [-1, 1].
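The embedding-dimension constraint and the coordinate normalisation described above can be sketched as below. The min-max scheme is an assumption: the paper only states that coordinates are normalised to [-1, 1], not how.

```python
# Embedding dimensions: POI + semantic + temporal must sum to the
# Transformer hidden dimension (80 + 24 + 24 = 128), per Eq. (2)-(3).
POI_DIM, SEM_DIM, TIME_DIM, HIDDEN_DIM = 80, 24, 24, 128
assert POI_DIM + SEM_DIM + TIME_DIM == HIDDEN_DIM

def normalise_coords(coords):
    """Min-max normalise (latitude, longitude) pairs to [-1, 1]
    per dimension, for the auxiliary trajectory forecasting task.
    Illustrative sketch; the exact normalisation is an assumption."""
    lats = [c[0] for c in coords]
    lons = [c[1] for c in coords]

    def scale(v, lo, hi):
        return 2.0 * (v - lo) / (hi - lo) - 1.0

    return [(scale(la, min(lats), max(lats)),
             scale(lo, min(lons), max(lons))) for la, lo in coords]

# Example: three check-in locations mapped onto [-1, 1] per coordinate.
print(normalise_coords([(0.0, 0.0), (10.0, 20.0), (5.0, 10.0)]))
```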