Learning Hierarchy-Enhanced POI Category Representations Using Disentangled Mobility Sequences
Authors: Hongwei Jia, Meng Chen, Weiming Huang, Kai Zhao, Yongshun Gong
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness of SD-CEM, we conduct comprehensive experiments using two check-in datasets covering three tasks. Experimental results demonstrate that SD-CEM outperforms several competitive baselines, highlighting its substantial improvement in performance as well as the understanding of learned category representations. |
| Researcher Affiliation | Academia | Hongwei Jia¹, Meng Chen¹, Weiming Huang², Kai Zhao³, Yongshun Gong¹. ¹School of Software, Shandong University; ²School of Computer Science and Engineering, Nanyang Technological University; ³Robinson College of Business, Georgia State University |
| Pseudocode | No | The paper describes the model architecture and training process using mathematical formulas and textual descriptions, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Data and source codes are at https://github.com/2837790380/SD-CEM. |
| Open Datasets | Yes | We adopt the public check-in data of Foursquare collected from the United States and Japan [Yang et al., 2016], and pre-process these check-ins following [Chen et al., 2021] to generate mobility sequences. |
| Dataset Splits | No | The paper mentions selecting optimal parameters using 'grid search with a small but adaptive step size,' which implies a validation process, but it does not explicitly specify dataset splits (e.g., percentages or sample counts for training, validation, and test sets). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using specific models like Bi-LSTM and Transformer layers, but it does not specify software dependencies with version numbers (e.g., Python version, specific deep learning frameworks like PyTorch or TensorFlow, or library versions). |
| Experiment Setup | Yes | For Bi-LSTM layers, we set the number of layers at 4 and the dimension of the hidden vector at 128. For the stacked transformer layers, we set the number of layers and attention heads at 8, and the hidden size at 128. The weight λ is 0.8. The Adam learning rate is initialized as 0.001 with a linear decay. The dimension of the category representations e and g is set at {20, 30, 40, 50, 60, 70, 80}. The optimal parameters are selected using grid search with a small but adaptive step size. |
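The hyperparameters reported in the experiment-setup row can be collected into a single configuration object for reference. This is a minimal sketch assuming Python; the class and field names are illustrative and are not defined anywhere in the paper or its released code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SDCEMConfig:
    """Hypothetical container for the hyperparameters reported in the paper."""
    bilstm_layers: int = 4            # number of Bi-LSTM layers
    bilstm_hidden: int = 128          # Bi-LSTM hidden vector dimension
    transformer_layers: int = 8       # stacked transformer layers
    attention_heads: int = 8          # attention heads per transformer layer
    transformer_hidden: int = 128     # transformer hidden size
    loss_weight_lambda: float = 0.8   # weight λ
    learning_rate: float = 1e-3       # Adam initial learning rate, linear decay
    # Candidate dimensions for category representations e and g,
    # selected by grid search:
    embedding_dims: tuple = (20, 30, 40, 50, 60, 70, 80)

config = SDCEMConfig()
print(config)
```

Grouping the settings this way makes the grid-searched values (`embedding_dims`) easy to distinguish from the fixed ones, which is the distinction the paper's setup paragraph draws.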