Next Point-of-Interest Recommendation with Inferring Multi-step Future Preferences

Authors: Lu Zhang, Zhu Sun, Ziqing Wu, Jie Zhang, Yew Soon Ong, Xinghua Qu

Venue: IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three datasets demonstrate the superiority of CFPRec against state-of-the-arts. We conduct experiments to investigate the following research questions: (RQ1) Does the proposed CFPRec outperform state-of-the-art baselines? (RQ2) How do different components of CFPRec affect its performance? (RQ3) How do key hyper-parameters of CFPRec affect its performance?
Researcher Affiliation | Collaboration | Lu Zhang (1), Zhu Sun (2,3), Ziqing Wu (1), Jie Zhang (1), Yew Soon Ong (3,1) and Xinghua Qu (4); (1) Nanyang Technological University, Singapore; (2) A*STAR Institute of High Performance Computing, Singapore; (3) A*STAR Centre for Frontier AI Research, Singapore; (4) Bytedance AI Lab, Singapore
Pseudocode | Yes | Algorithm 1: Training of the CFPRec
Open Source Code | Yes | Our code is available at https://github.com/wuziqi2/CFPRec
Open Datasets | Yes | We adopt three datasets collected from Foursquare [Yang et al., 2016] in three cities, i.e., Singapore (SIN), New York City (NYC) and Phoenix (PHO), as shown in Table 1.
Dataset Splits | Yes | We then split the trajectories of each user in the ratio of 8:1:1 based on timestamps: the earliest 80% forms the training set, the next 10% the validation set, and the last 10% the test set (a per-user split sketch appears after the table).
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper mentions PyTorch and Adam but does not specify version numbers or any other pinned software dependencies.
Experiment Setup | Yes | The embedding size D is searched in [20, 100] with step 20; the learning rate γ and the regularization coefficient are searched in {0.0001, 0.001, 0.01, 0.1}; the remaining parameters of each method are searched as suggested by the original papers. CFPRec is implemented in PyTorch with Adam as the optimizer; D = 60/40/60, the number of training iterations is 25/25/15, and the number of LSTM layers is 3/2/2 for SIN, NYC and PHO, respectively; γ = 0.0001; η = 1; the number of Transformer blocks is 1 (a configuration sketch follows the table).
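
For concreteness, here is a minimal sketch of the per-user chronological 8:1:1 split described in the Dataset Splits row. It assumes trajectories are already grouped by user and sorted by timestamp; the names and dictionary layout (`user_trajs`, `Trajectory`) are illustrative assumptions, not taken from the CFPRec repository.

```python
# Minimal sketch of a per-user chronological 8:1:1 split (assumed layout,
# not the authors' preprocessing code).
from typing import Dict, List, Tuple

Trajectory = List[int]  # e.g., a time-ordered sequence of POI ids


def split_user_trajectories(
    user_trajs: Dict[int, List[Trajectory]]
) -> Tuple[dict, dict, dict]:
    """Per user, the earliest 80% of trajectories go to train,
    the next 10% to validation, and the last 10% to test.
    Assumes each user's trajectory list is sorted by timestamp."""
    train, val, test = {}, {}, {}
    for user, trajs in user_trajs.items():
        n = len(trajs)
        n_train = int(n * 0.8)
        n_val = int(n * 0.1)
        train[user] = trajs[:n_train]
        val[user] = trajs[n_train:n_train + n_val]
        test[user] = trajs[n_train + n_val:]
    return train, val, test
```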
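
And a sketch collecting the reported hyper-parameters from the Experiment Setup row into one place. Only the numeric values come from the paper; the key names, the `make_optimizer` helper, and the mapping of the regularization coefficient onto Adam's `weight_decay` are assumptions.

```python
# Reported CFPRec hyper-parameters per dataset (values from the paper;
# key names are assumptions, not the authors' actual config schema).
import torch

CONFIGS = {
    "SIN": dict(embed_dim=60, epochs=25, lstm_layers=3),
    "NYC": dict(embed_dim=40, epochs=25, lstm_layers=2),
    "PHO": dict(embed_dim=60, epochs=15, lstm_layers=2),
}
SHARED = dict(lr=1e-4, eta=1.0, transformer_blocks=1)

# Search spaces applied to all methods, as stated in the paper.
EMBED_GRID = list(range(20, 101, 20))      # D in [20, 100], step 20
LR_REG_GRID = [1e-4, 1e-3, 1e-2, 1e-1]     # learning rate / reg. coefficient


def make_optimizer(model: torch.nn.Module,
                   lr: float = SHARED["lr"],
                   reg: float = 0.0) -> torch.optim.Adam:
    # Adam is the optimizer named in the paper; treating the regularization
    # coefficient as weight_decay is our assumption.
    return torch.optim.Adam(model.parameters(), lr=lr, weight_decay=reg)
```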