Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Scalable Trajectory-User Linking with Dual-Stream Representation Networks
Authors: Hao Zhang, Wei Chen, Xingyu Zhao, Jianpeng Qi, Guiyuan Jiang, Yanwei Yu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on check-in mobility datasets from three real-world cities and the nationwide U.S. demonstrate the superiority of ScaleTUL over state-of-the-art baselines for large-scale TUL tasks. |
| Researcher Affiliation | Academia | 1 Faculty of Information Science and Engineering, Ocean University of China, Qingdao, China; 2 The Hong Kong University of Science and Technology (Guangzhou) |
| Pseudocode | No | The paper describes the methodology using textual explanations and a block diagram (Figure 1), but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of our model is available at https://github.com/sleevefishcode/ScalableTUL. |
| Open Datasets | Yes | We use real-world check-in data collected from the popular location-based social network platform Foursquare (Yang et al. 2015), selecting data from three cities and the entire United States for our dataset. |
| Dataset Splits | Yes | In our experiments, we use the first 80% of each user's sub-trajectories for training, and the remaining 20% for testing. Additionally, 20% of the training data is set aside as a validation set to assist with an early stopping mechanism to find the best parameters and avoid overfitting. |
| Hardware Specification | Yes | All experiments are conducted on a machine with an Intel(R) Xeon(R) Silver 4214 (2.20GHz, 12 cores) and an NVIDIA GeForce RTX 3090 (24GB memory). |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers. It mentions models like RNN, Transformer, Structured State Space Models, and Bi-directional Long Short-Term Memory but not the software libraries or their versions used for implementation. |
| Experiment Setup | Yes | For ScaleTUL, we set the default embedding dimension to 512, apply an early stopping mechanism with a patience of 5 to avoid overfitting, and adjust the learning rates as follows: In the first stage, the initial learning rate is set to 0.001 and decays by 20% every 5 epochs. In the second stage, the initial learning rate is set to 0.0005 and decays by 90% every 5 epochs. |
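The dataset split and two-stage learning-rate schedule quoted above can be sketched as follows. This is a minimal, hedged illustration of the reported protocol, not code from the authors' repository; the function and variable names are hypothetical.

```python
def split_user_trajectories(sub_trajectories):
    """Chronological split of one user's sub-trajectories, as reported:
    first 80% for training, last 20% for testing, with 20% of the
    training portion held out as a validation set for early stopping."""
    n = len(sub_trajectories)
    train_end = int(n * 0.8)
    train, test = sub_trajectories[:train_end], sub_trajectories[train_end:]
    val_start = int(len(train) * 0.8)
    train, val = train[:val_start], train[val_start:]
    return train, val, test


def staged_lr(epoch, stage):
    """Step-decay schedule as reported: stage 1 starts at 1e-3 and keeps
    80% of the rate every 5 epochs (a 20% decay); stage 2 starts at 5e-4
    and keeps 10% every 5 epochs (a 90% decay)."""
    base, gamma = (1e-3, 0.8) if stage == 1 else (5e-4, 0.1)
    return base * gamma ** (epoch // 5)


# Example: 10 sub-trajectories split 6 / 2 / 2 (train / val / test),
# and the stage-1 learning rate after two decay steps.
print(split_user_trajectories(list(range(10))))
print(staged_lr(12, 1))  # 1e-3 * 0.8**2
```

In a framework such as PyTorch, each stage's decay could equivalently be expressed with a step scheduler (step size 5, multiplicative factor 0.8 or 0.1).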