Discovering Subsequence Patterns for Next POI Recommendation
Authors: Kangzhi Zhao, Yong Zhang, Hongzhi Yin, Jin Wang, Kai Zheng, Xiaofang Zhou, Chunxiao Xing
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets demonstrate the effectiveness of our model. |
| Researcher Affiliation | Academia | ¹BNRist, RIIT, Department of Computer Science and Technology, Tsinghua University; ²School of Information Technology and Electrical Engineering, The University of Queensland; ³Computer Science Department, University of California; ⁴University of Electronic Science and Technology of China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link to its source code or explicitly state that its code is released. |
| Open Datasets | Yes | The Gowalla dataset includes world-wide check-in data from February 2009 to October 2010. The Foursquare dataset includes check-in data from April 2012 to September 2013 within the United States (except Alaska and Hawaii). |
| Dataset Splits | Yes | We rank the check-in history of each user in ascending order of the time step and split the dataset D into Dtrain : Dvalidation : Dtest as 8:1:1, following previous studies [Liu et al., 2016; Chang et al., 2018] (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The reported results of ASPPA use the best-tuned configuration on the validation set (λ = k = 0.001, Ne = 128, Nh = 256, learning rate 0.01, and dropout 0.5 for both datasets). Training uses the mini-batch SGD algorithm with a batch size of 1024, chosen according to device capacity (see the configuration sketch below the table). |
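
The chronological 8:1:1 split described in the table can be reproduced roughly as follows. This is a minimal sketch, assuming check-ins are stored as (user, poi, timestamp) tuples; the function name and data layout are illustrative, not the authors' code.

```python
# Hypothetical sketch of a per-user chronological 8:1:1 split,
# matching the paper's description but not taken from its code.
from collections import defaultdict

def split_checkins(checkins, ratios=(0.8, 0.1, 0.1)):
    """checkins: iterable of (user_id, poi_id, timestamp) tuples."""
    by_user = defaultdict(list)
    for user, poi, ts in checkins:
        by_user[user].append((ts, poi))

    train, valid, test = [], [], []
    for user, seq in by_user.items():
        seq.sort()  # ascending order of time step, as in the paper
        n = len(seq)
        n_train = int(n * ratios[0])
        n_valid = int(n * ratios[1])
        train += [(user, poi) for _, poi in seq[:n_train]]
        valid += [(user, poi) for _, poi in seq[n_train:n_train + n_valid]]
        test  += [(user, poi) for _, poi in seq[n_train + n_valid:]]
    return train, valid, test
```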
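
The reported training configuration can likewise be expressed as a hedged PyTorch sketch. The ASPPA architecture itself is not public, so the model below is a generic stand-in; only the embedding size Ne = 128, hidden size Nh = 256, learning rate 0.01, dropout 0.5, and batch size 1024 come from the paper, while λ = k = 0.001 are coefficients in ASPPA's own loss and are not represented here.

```python
# Stand-in model reflecting the reported hyperparameters; not the
# paper's ASPPA architecture, which is not publicly available.
import torch
import torch.nn as nn

NUM_POIS   = 10_000    # placeholder POI vocabulary size (assumption)
NE, NH     = 128, 256  # embedding / hidden sizes from the paper
LR         = 0.01      # learning rate from the paper
DROPOUT    = 0.5       # dropout from the paper
BATCH_SIZE = 1024      # mini-batch size from the paper

class StandInModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_POIS, NE)
        self.rnn = nn.GRU(NE, NH, batch_first=True)
        self.drop = nn.Dropout(DROPOUT)
        self.out = nn.Linear(NH, NUM_POIS)

    def forward(self, poi_seq):
        # poi_seq: (batch, seq_len) tensor of POI ids
        h, _ = self.rnn(self.embed(poi_seq))
        return self.out(self.drop(h[:, -1]))  # scores for the next POI

model = StandInModel()
# Mini-batch SGD as stated in the paper; batches of BATCH_SIZE would
# be drawn from the training split built above.
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
```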