Personalized Ranking Metric Embedding for Next New POI Recommendation
Authors: Shanshan Feng, Xutao Li, Yifeng Zeng, Gao Cong, Yeow Meng Chee, Quan Yuan
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. |
| Researcher Affiliation | Academia | (1) Interdisciplinary Graduate School, Nanyang Technological University, Singapore, sfeng003@e.ntu.edu.sg; (2) School of Computer Engineering, Nanyang Technological University, Singapore, {lixutao@, gaocong@, qyuan1@e.}ntu.edu.sg; (3) School of Computing, Teesside University, UK, Y.Zeng@tees.ac.uk; (4) School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, ymchee@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1: PRME (a hedged sketch of the update step appears below the table) |
| Open Source Code | No | The paper does not provide an explicit statement or link to the open-source code for the described methodology. |
| Open Datasets | Yes | We use two publicly available datasets. The first dataset is the Foursquare check-ins within Singapore [Yuan et al., 2013] while the second one is the Gowalla check-ins dataset within California and Nevada [Cho et al., 2011]. |
| Dataset Splits | Yes | For the one-year check-ins data, we use the check-ins in the first 10 months as training set, the 11th month as tuning set, and the last month as test set. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | Aside from a passing mention of Matlab in some versions of the paper, the authors do not specify any software dependencies or version numbers for their experimental setup. |
| Experiment Setup | Yes | In the experiments, we use the two datasets introduced in Section 3. For the one-year check-ins data, we use the check-ins in the first 10 months as training set, the 11th month as tuning set, and the last month as test set. We exploit two well-known evaluation metrics [Yuan et al., 2013], namely Precision@N and Recall@N (denoted by Pre@N and Rec@N respectively). Given a user and his current location, we use the next check-in in successive τ hours as the ground truth. The time window threshold τ is set at 6 hours following [Cheng et al., 2013]. Based on the tuning set, the number of dimensions is set at K = 60, learning rate γ = 0.005, regularization term λ = 0.03 and component weight α = 0.2. (A metric sketch follows the table.) |
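
The Pseudocode row above refers to Algorithm 1 (PRME). The following is a minimal, hypothetical NumPy sketch of one pairwise (BPR-style) update under that model, assuming squared-Euclidean distances in a sequential-transition space and a user-preference space combined with weight α; the names `X_seq`, `X_pref`, `U_pref`, `distance`, and `sgd_step` are illustrative, not the authors' code. The hyperparameter values are the tuned ones quoted in the Experiment Setup row.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_users, n_pois = 60, 100, 500          # K = 60 as tuned in the paper
alpha, gamma, lam = 0.2, 0.005, 0.03       # component weight, learning rate, regularizer

X_seq = rng.normal(scale=0.1, size=(n_pois, K))    # POIs in the sequential space
X_pref = rng.normal(scale=0.1, size=(n_pois, K))   # POIs in the preference space
U_pref = rng.normal(scale=0.1, size=(n_users, K))  # users in the preference space

def distance(u, lc, l):
    """Weighted squared-Euclidean metric; smaller means a more likely next POI."""
    d_pref = np.sum((U_pref[u] - X_pref[l]) ** 2)
    d_seq = np.sum((X_seq[lc] - X_seq[l]) ** 2)
    return alpha * d_pref + (1 - alpha) * d_seq

def sgd_step(u, lc, pos, neg):
    """One pairwise update: pull the observed next POI `pos` closer than `neg`."""
    z = distance(u, lc, neg) - distance(u, lc, pos)
    g = 1.0 / (1.0 + np.exp(z))  # = 1 - sigmoid(z), gradient weight of ln sigmoid(z)
    # Snapshot the vectors so all gradients use pre-update values.
    up, xp, xn = U_pref[u].copy(), X_pref[pos].copy(), X_pref[neg].copy()
    sc, sp, sn = X_seq[lc].copy(), X_seq[pos].copy(), X_seq[neg].copy()
    # Preference-space updates (gradient ascent on ln sigmoid(z) with L2 decay).
    U_pref[u]   += gamma * (2 * alpha * g * (xp - xn) - lam * up)
    X_pref[pos] += gamma * (2 * alpha * g * (up - xp) - lam * xp)
    X_pref[neg] += gamma * (-2 * alpha * g * (up - xn) - lam * xn)
    # Sequential-space updates.
    X_seq[lc]   += gamma * (2 * (1 - alpha) * g * (sp - sn) - lam * sc)
    X_seq[pos]  += gamma * (2 * (1 - alpha) * g * (sc - sp) - lam * sp)
    X_seq[neg]  += gamma * (-2 * (1 - alpha) * g * (sc - sn) - lam * sn)
```

At prediction time, candidate POIs would be ranked by ascending `distance`: the smaller the combined metric, the higher the POI is recommended as the user's next new POI.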
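The 10/1/1-month split in the Dataset Splits row can be stated precisely. A minimal sketch, assuming check-ins are `(user, poi, timestamp)` tuples and that months are calendar months counted from the earliest check-in (the paper does not spell out the month-boundary convention); `temporal_split` and `month_index` are illustrative names.

```python
from datetime import datetime

def month_index(ts, start):
    """Calendar-month offset of timestamp `ts` from the log's start date."""
    return (ts.year - start.year) * 12 + (ts.month - start.month)

def temporal_split(checkins):
    """Months 0-9 -> training, month 10 -> tuning, month 11 -> test."""
    start = min(ts for _, _, ts in checkins)
    train = [c for c in checkins if month_index(c[2], start) < 10]
    tune = [c for c in checkins if month_index(c[2], start) == 10]
    test = [c for c in checkins if month_index(c[2], start) > 10]
    return train, tune, test

# Example with synthetic timestamps spanning one year.
log = [(1, "poi_a", datetime(2010, m, 15)) for m in range(1, 13)]
train, tune, test = temporal_split(log)
assert (len(train), len(tune), len(test)) == (10, 1, 1)
```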
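For the Pre@N and Rec@N in the Experiment Setup row, here is a minimal sketch of the per-case computation, assuming the ground truth is the set of POIs the user checks into within τ = 6 hours of the current check-in and that the reported numbers average over all test cases; the function name is illustrative.

```python
def precision_recall_at_n(recommended, ground_truth, n):
    """Pre@N and Rec@N for one (user, current-POI) test case.

    `recommended` is the model's ranked candidate list; `ground_truth` is the
    set of POIs actually visited within tau = 6 hours.
    """
    hits = len(set(recommended[:n]) & set(ground_truth))
    precision = hits / n
    recall = hits / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Averaging the per-case values yields the reported Pre@N / Rec@N.
cases = [(["a", "b", "c", "d"], {"b", "x"}), (["e", "f", "g", "h"], {"z"})]
scores = [precision_recall_at_n(rec, gt, 3) for rec, gt in cases]
pre_at_3 = sum(p for p, _ in scores) / len(scores)  # (1/3 + 0/3) / 2
rec_at_3 = sum(r for _, r in scores) / len(scores)  # (1/2 + 0/1) / 2
```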