MFNP: A Meta-optimized Model for Few-shot Next POI Recommendation
Authors: Huimin Sun, Jiajie Xu, Kai Zheng, Pengpeng Zhao, Pingfu Chao, Xiaofang Zhou
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world datasets show that our model outperforms the state-of-the-art methods on next POI recommendation for cold-start users. |
| Researcher Affiliation | Academia | School of Computer Science and Technology, Soochow University; University of Electronic Science and Technology of China; The Hong Kong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 The training process of MFNP |
| Open Source Code | No | The paper does not include a statement about releasing source code or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on two public LBSN datasets, namely Foursquare [Yang et al., 2015] and Gowalla [Yin et al., 2013]. |
| Dataset Splits | Yes | We randomly separate the users into training and testing users with a ratio of 80:20. In particular, for all the meta-optimized methods, the check-in sequence of a single user is further divided into a support set and a query set by employing the data augmentation strategy (e.g., a sequence {v0, v1, v2, v3} can be divided into two new successive sequences: {v0, v1, v2} as the support set and {v0, v1, v2, v3} as the query set); see the first sketch after the table. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Following the work in [Feng et al., 2018], we set the dimension of embeddings and the hidden states to 500 for all deep learning-based methods. The K of region clusters is set to 30, while the number of user clusters is set to 6. All the parameters in our model are optimized using the gradient descent optimization algorithm Adam with a batch size of 1 and a learning rate of 0.0001; see the second sketch after the table. |
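
The Dataset Splits row describes an 80:20 random split over users plus a prefix-based data augmentation that turns one check-in sequence into support/query pairs. The following is a minimal sketch of that split, assuming a check-in sequence is simply a list of POI ids; the function name `augment_support_query` and the user ids are hypothetical, not from the paper.

```python
import random

def augment_support_query(checkins):
    """Yield (support, query) pairs from one user's check-in sequence.

    Mirrors the paper's example: [v0, v1, v2, v3] yields the support set
    [v0, v1, v2] and the query set [v0, v1, v2, v3], i.e., the query
    extends the support by the next check-in to be predicted.
    """
    pairs = []
    for t in range(2, len(checkins)):
        pairs.append((checkins[:t], checkins[:t + 1]))
    return pairs

# 80:20 random split of users into training and testing users.
users = [f"u{i}" for i in range(100)]  # hypothetical user ids
random.shuffle(users)
cut = int(0.8 * len(users))
train_users, test_users = users[:cut], users[cut:]

# The paper's illustrative sequence; the last pair matches its example.
print(augment_support_query(["v0", "v1", "v2", "v3"]))
```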
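
The Experiment Setup row fixes the key hyperparameters. Below is a hedged configuration sketch assuming PyTorch (the paper names no framework and releases no code); the placeholder module and the POI vocabulary size are assumptions standing in for the actual MFNP model.

```python
import torch

EMBED_DIM = 500            # embedding and hidden-state size for all DL methods
NUM_REGION_CLUSTERS = 30   # K region clusters
NUM_USER_CLUSTERS = 6      # number of user clusters
BATCH_SIZE = 1
LEARNING_RATE = 1e-4

# Purely illustrative stand-in for the MFNP model: a POI embedding table
# feeding a linear layer of the stated hidden size (vocab size assumed).
model = torch.nn.Sequential(
    torch.nn.Embedding(50_000, EMBED_DIM),
    torch.nn.Linear(EMBED_DIM, EMBED_DIM),
)

# Adam with the learning rate reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```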