An Attentional Recurrent Neural Network for Personalized Next Location Recommendation
Authors: Qing Guo, Zhu Sun, Jie Zhang, Yin-Leng Theng
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple real-world datasets demonstrate that ARNN outperforms state-of-the-art methods. |
| Researcher Affiliation | Academia | Qing Guo, Zhu Sun, Jie Zhang, Yin-Leng Theng; Nanyang Technological University, Singapore; {qguo006, zhu.sun, zhangj, tyltheng}@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1 describes the details about the random walk process. |
| Open Source Code | No | The paper does not provide explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We adopt several real-world datasets from Foursquare (Yang et al. 2015) and Gowalla. ... https://snap.stanford.edu/data/loc-gowalla.html |
| Dataset Splits | Yes | We use the earliest 70% of each user's check-ins as the training set, the latest 20% as the test set, and the remaining 10% as the validation set. |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., specific GPU or CPU models). |
| Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | Parameter Settings. ... For the neighbor discovery procedure in ARNN, the distance threshold Δd = 2km, walk number walk num = 50 and walk length walk len = 10 for the meta-path based random walk. ... The 4 embedding sizes are set as the same (Du = Dl = Dt = Ds) and are 120/160/120 for NY, TK and SF, respectively. The number of hidden units, Dh, is set as 64 for all cities. The number of recurrent layers is 1. The epoch number is set as 20. The learning rate is 0.01. The regularization parameter λ is chosen as 0.01. The hidden state and cell state are initialized as zero. We use Stochastic Gradient Descent (Bottou 1991) and Back Propagation Through Time (Rumelhart, Hinton, and Williams 1986) to learn the parameters with a batch size of 64. |
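The chronological per-user split described in the Dataset Splits row (earliest 70% for training, next 10% for validation, latest 20% for testing) can be sketched as follows. This is a minimal illustration, not the paper's code; the function and field names are assumptions.

```python
def chronological_split(checkins, train_frac=0.7, val_frac=0.1):
    """Split one user's check-ins chronologically.

    Earliest `train_frac` of check-ins -> training set,
    next `val_frac` -> validation set, remainder -> test set.
    Each check-in is assumed to be a dict with a "timestamp" field.
    """
    ordered = sorted(checkins, key=lambda c: c["timestamp"])
    n = len(ordered)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = ordered[:n_train]
    val = ordered[n_train:n_train + n_val]
    test = ordered[n_train + n_val:]
    return train, val, test
```

For a user with 10 check-ins, this yields a 7/1/2 split, matching the 70%/10%/20% proportions reported in the paper.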
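The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration for reimplementation. The values below are taken verbatim from the paper's reported settings; the dictionary structure and key names are illustrative assumptions.

```python
# Hyperparameters reported in the paper's "Parameter Settings" section.
ARNN_CONFIG = {
    "distance_threshold_km": 2,     # Δd for the neighbor discovery procedure
    "walk_num": 50,                 # meta-path based random walks
    "walk_len": 10,
    # Du = Dl = Dt = Ds, per city (New York / Tokyo / San Francisco)
    "embedding_size": {"NY": 120, "TK": 160, "SF": 120},
    "hidden_units": 64,             # Dh, same for all cities
    "recurrent_layers": 1,
    "epochs": 20,
    "learning_rate": 0.01,
    "l2_lambda": 0.01,              # regularization parameter λ
    "batch_size": 64,
    "optimizer": "SGD",             # trained with Back Propagation Through Time
}
```

Hidden and cell states are initialized to zero, per the quoted settings.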