Successive POI Recommendation via Brain-Inspired Spatiotemporal Aware Representation
Authors: Gehua Ma, He Wang, Jingyuan Zhao, Rui Yan, Huajin Tang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on large real-world datasets to demonstrate the effectiveness of STEP, and our method outperforms baselines according to experimental results. |
| Researcher Affiliation | Collaboration | Gehua Ma (1,2), He Wang (1,2), Jingyuan Zhao (3,4), Rui Yan (5), Huajin Tang (1,2,6)*. (1) College of Computer Science and Technology, Zhejiang University; (2) The State Key Lab of Brain-Machine Intelligence, Zhejiang University; (3) Group Data, The Great Eastern Life Assurance Company Limited; (4) Department of Statistics & Data Science, National University of Singapore; (5) College of Computer Science and Technology, Zhejiang University of Technology; (6) MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | The Instagram Check-in dataset (Chang et al. 2018) was collected from Instagram in New York, and the data was preprocessed in the same manner as previous works (Zhao et al. 2016, 2017). The Gowalla dataset is a globally collected large-scale social media dataset (Cho, Myers, and Leskovec 2011). |
| Dataset Splits | Yes | Check-in sequences are sorted by timestamps; the first 70% is used as a training set, and the remaining 30% for validation and testing. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as CPU or GPU models. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set the context window size in the POI sequential model to 2 and adopt h = 2, m = 11 for building G_st. We utilize the Adam optimizer with batch size 512, β1 = 0.9, β2 = 0.999, and set the initial learning rate to 0.001 followed by a reduce-on-plateau decay policy with decay factor 0.1 during training. Weighting factors λ, λ_spa are set to 1×10⁻⁴ and 0.2, and the embedding dimensions {d_seq, d_spa, d_st} are set to {32, 64, 96}. |
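
The temporal split quoted in the Dataset Splits row is simple enough to mirror in code. Below is a minimal Python sketch; the paper names no framework or implementation, whether the 70/30 cut is applied globally or per user sequence is not stated, and `temporal_split` is a hypothetical helper, not the authors' code.

```python
# Hypothetical sketch of the 70/30 temporal split described in the paper:
# check-ins are sorted by timestamp, the first 70% form the training set,
# and the remaining 30% are used for validation and testing.
def temporal_split(checkins, train_frac=0.70):
    """checkins: iterable of (user_id, poi_id, timestamp) tuples."""
    ordered = sorted(checkins, key=lambda c: c[2])   # sort by timestamp
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]              # train, val+test
```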
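
Likewise, the reported training configuration maps directly onto standard optimizer APIs. The sketch below assumes PyTorch purely for concreteness (the paper specifies no software stack, as the Software Dependencies row notes); `model` is a stand-in module rather than STEP itself, and the symbol of the first weighting factor is garbled in the source text, so `LAMBDA` is a reconstruction.

```python
import torch

# Stand-in module; the paper's STEP model is not publicly released.
model = torch.nn.Linear(96, 96)  # placeholder; d_st = 96 per the paper

# Adam with the reported hyperparameters.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,               # initial learning rate 0.001
    betas=(0.9, 0.999),    # β1, β2 as reported
)

# Reduce-on-plateau decay with factor 0.1, as described.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)

BATCH_SIZE = 512
LAMBDA, LAMBDA_SPA = 1e-4, 0.2       # weighting factors λ (reconstructed), λ_spa
D_SEQ, D_SPA, D_ST = 32, 64, 96      # embedding dimensions {d_seq, d_spa, d_st}
```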