Mutual Distillation Learning Network for Trajectory-User Linking
Authors: Wei Chen, Shuzhe Li, Chao Huang, Yanwei Yu, Yongguo Jiang, Junyu Dong
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world check-in mobility datasets demonstrate the superiority of MainTUL against state-of-the-art baselines. The source code of our model is available at https://github.com/Onedean/MainTUL. |
| Researcher Affiliation | Academia | ¹College of Computer Science and Technology, Ocean University of China; ²Department of Computer Science, The University of Hong Kong |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of our model is available at https://github.com/Onedean/MainTUL. |
| Open Datasets | Yes | We use two real-world check-in mobility datasets [Liu et al., 2014; Yang et al., 2015] collected from two popular location-based social network platforms, i.e., Foursquare and Weeplaces. |
| Dataset Splits | Yes | In experiments, we use the first 80% of sub-trajectories of each user for training and the remaining 20% for testing, and select 20% training data as the validation set to cooperate with the early stop mechanism to find the best parameters and avoid overfitting. |
| Hardware Specification | No | The paper does not specify the hardware used (e.g., CPU, GPU models) for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For MainTUL, we set check-in embedding dimension d to 512, λ to 10, use an early stopping mechanism, and set patience to 3 to avoid overfitting. The learning rate is initially set to 0.001 and decays by 10% every 5 epochs. |
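The split and schedule reported above can be sketched as follows. This is a minimal illustration of the described protocol (chronological 80/20 train/test split per user, 20% of training data held out for validation, and a learning rate of 0.001 decaying by 10% every 5 epochs); the function names are hypothetical and not taken from the released MainTUL code.

```python
def chronological_split(sub_trajectories, train_frac=0.8, val_frac=0.2):
    """Split one user's sub-trajectories chronologically:
    first 80% for training, remaining 20% for testing,
    with 20% of the training portion held out as validation
    for early stopping (patience 3 in the paper)."""
    n_train = int(len(sub_trajectories) * train_frac)
    train, test = sub_trajectories[:n_train], sub_trajectories[n_train:]
    n_val = int(len(train) * val_frac)
    # Hold out the last 20% of the training data for validation.
    train, val = train[:len(train) - n_val], train[len(train) - n_val:]
    return train, val, test


def learning_rate(epoch, base_lr=1e-3, decay=0.9, step=5):
    """Initial learning rate 0.001, decayed by 10% every 5 epochs."""
    return base_lr * decay ** (epoch // step)
```

For example, a user with 10 sub-trajectories ends up with 7 for training, 1 for validation, and 2 for testing, and the learning rate drops to 0.0009 at epoch 5.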