Learning Time Slot Preferences via Mobility Tree for Next POI Recommendation
Authors: Tianhao Huang, Xuan Pan, Xiangrui Cai, Ying Zhang, Xiaojie Yuan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The comprehensive experimental results demonstrate the superiority of MTNet over ten state-of-the-art next POI recommendation models across three real-world LBSN datasets, substantiating the efficacy of time slot preference learning facilitated by Mobility Tree. |
| Researcher Affiliation | Academia | Tianhao Huang¹*, Xuan Pan¹²*, Xiangrui Cai¹³⁴, Ying Zhang¹³, Xiaojie Yuan¹²³. ¹College of Computer Science, Nankai University; ²Tianjin Key Laboratory of Network and Data Security Technology, Tianjin, China; ³Key Laboratory of Data and Intelligent System Security, Ministry of Education, China; ⁴Science and Technology on Communication Networks Laboratory, Shijiazhuang, China |
| Pseudocode | No | The paper describes the four-step node interaction operation and other model components but does not present them in pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | https://github.com/Skyyyy0920/MTNet |
| Open Datasets | Yes | We conduct experiments using three widely used datasets acquired from two LBSN platforms, namely Foursquare and Gowalla. Specifically, for Foursquare, we use data collected separately in Tokyo (Yang et al. 2015) and New York City (Yang et al. 2015) during the period between 12th April 2012 and 16th February 2013. For Gowalla, we utilize data collected in California and Nevada (Cho, Myers, and Leskovec 2011) spanning from February 2009 to October 2010. |
| Dataset Splits | Yes (see the split sketch below) | We partition the datasets into training, validation, and test sets in chronological order. The training set, consisting of the initial 80% of check-ins, is used to train the model. The middle 10% of check-ins form the validation set, which is utilized for selecting the best-performing model. Finally, we evaluate the model on the test set that consists of the last 10% of check-ins. |
| Hardware Specification | Yes | We develop MTNet based on PyTorch and conduct experiments on hardware with an AMD Ryzen 7 4800H CPU and an NVIDIA GeForce RTX 2060 GPU. |
| Software Dependencies | No | The paper states "We develop MTNet based on PyTorch" but does not specify the version of PyTorch or of any other software library used, nor the versions of the optimizer or other components. |
| Experiment Setup | Yes (see the config sketch below) | We set the number of time slots to 12 for TKY and CA, and 4 for NYC, according to performance on the validation set. The user and POI embedding dimensions are both set to 128, while the category and geography embedding dimensions are set to 32. The hidden size of the Tree-LSTM module is 512. We employ the Adam (Kingma and Ba 2014) optimizer with an initial learning rate of 1×10⁻³ and a weight decay rate of 1×10⁻⁴. We set the influence of the day node η = 1 and the period node δ = 1. We generate 60 clusters for geographical information representation. Moreover, we utilize a step-by-step learning rate scheduler with a step size of 6 and a decay factor of 0.9. For the Transformer component, we incorporate 2 transformer layers, each with 2 attention heads and a dimension of 1024. Additionally, we randomly drop the embeddings and parameters with dropout rates of 0.4 and 0.6, respectively. Finally, we run each model for a total of 50 epochs with a batch size of 1024. |
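
To make the Dataset Splits protocol concrete, here is a minimal sketch of a chronological 80/10/10 partition. The DataFrame layout and the column name `timestamp` are assumptions for illustration; the paper does not describe its data schema, and the released repository may organize check-ins differently.

```python
import pandas as pd

def chronological_split(checkins: pd.DataFrame, time_col: str = "timestamp"):
    """Split check-ins chronologically into 80% train / 10% val / 10% test,
    mirroring the protocol quoted in the Dataset Splits row.
    The column name `time_col` is an assumption, not taken from the paper."""
    ordered = checkins.sort_values(time_col).reset_index(drop=True)
    n = len(ordered)
    train_end = int(n * 0.8)
    val_end = int(n * 0.9)
    train = ordered.iloc[:train_end]       # earliest 80% of check-ins
    val = ordered.iloc[train_end:val_end]  # middle 10%, for model selection
    test = ordered.iloc[val_end:]          # final 10%, for evaluation
    return train, val, test
```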
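The Experiment Setup row quotes enough hyperparameters to reconstruct the optimizer and scheduler configuration. The following PyTorch sketch assumes the standard `torch.optim` Adam and StepLR classes were used (the paper names Adam and a step-wise scheduler but not the exact implementations); the `nn.Linear` model is a hypothetical stand-in that only anchors the optimizer.

```python
import torch
from torch import nn

# Hypothetical stand-in module; MTNet's real architecture lives in the linked repo.
model = nn.Linear(128, 128)

# Adam with the quoted initial learning rate (1e-3) and weight decay (1e-4).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Step-by-step scheduler with the quoted step size (6) and decay factor (0.9).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6, gamma=0.9)

for epoch in range(50):   # 50 training epochs, per the setup description
    ...                   # one pass over batches of size 1024 would go here
    scheduler.step()      # decay the learning rate every 6 epochs
```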