Taming the Long Tail in Human Mobility Prediction

Authors: Xiaohang Xu, Renhe Jiang, Chuang Yang, Zipei Fan, Kaoru Sezaki

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments with two real-world trajectory datasets demonstrate that LoTNext significantly surpasses existing state-of-the-art works." and "We evaluate our LoTNext on two publicly available real-world LBSN datasets: Gowalla and Foursquare."
Researcher Affiliation | Academia | Xiaohang Xu, Renhe Jiang, Chuang Yang, Zipei Fan, Kaoru Sezaki (The University of Tokyo); xhxu@g.ecc.u-tokyo.ac.jp, {jiangrh, chuang.yang}@csis.u-tokyo.ac.jp, {fanzipei, sezaki}@iis.u-tokyo.ac.jp
Pseudocode | Yes | "Algorithm 1: Pseudo-code of training LoTNext"
Open Source Code | Yes | https://github.com/Yukayo/LoTNext
Open Datasets | Yes | "We evaluate our LoTNext on two publicly available real-world LBSN datasets: Gowalla and Foursquare."
Dataset Splits | No | "We then split each user's check-in records according to temporal order, using the first 80% for training and the remaining 20% for testing." The paper does not explicitly state a validation split for purposes such as hyperparameter tuning or early stopping.
Hardware Specification | Yes | "We implement LoTNext using PyTorch 1.13.1 on a Linux server equipped with 384GB RAM, a 10-core Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, and Nvidia RTX 3090 GPUs."
Software Dependencies | Yes | "We implement LoTNext using PyTorch 1.13.1"
Experiment Setup | Yes | "The embedding dimensions for POIs and users are set to 10, and the time embedding dimension is set to 6. For the Transformer architecture, we incorporate two multi-head attention mechanisms and 2 encoder blocks. For the spatial decay rate β, we follow the settings of Flashback [43]." and "The results, shown in Figure 6(a) for Gowalla and Figure 6(c) for Foursquare, indicate that Acc@1 and MRR remain stable across different values, with the optimal threshold identified as δ = 0.5. Next, we vary the logit adjustment weight τ from 1 to 2 in increments of 0.2 to test the model's performance in balancing class imbalances. Figure 6(b) and Figure 6(d) reveal that τ = 1.2 yields the best results on both datasets, suggesting a moderate adjustment weight helps generalize better without overly amplifying rare classes."
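The per-user temporal split quoted under Dataset Splits (first 80% of each user's check-ins for training, last 20% for testing) can be sketched as follows. This is a minimal illustration, not the paper's released code; the record layout `(user_id, timestamp, poi_id)` and the function name are assumptions.

```python
from collections import defaultdict

def temporal_split(checkins, train_ratio=0.8):
    """Split each user's check-in records by temporal order.

    `checkins` is a list of (user_id, timestamp, poi_id) tuples.
    For every user, the earliest `train_ratio` fraction of records
    goes to the training set and the remainder to the test set.
    """
    per_user = defaultdict(list)
    for record in checkins:
        per_user[record[0]].append(record)

    train, test = [], []
    for records in per_user.values():
        records.sort(key=lambda r: r[1])        # temporal order
        cut = int(len(records) * train_ratio)   # first 80% → train
        train.extend(records[:cut])
        test.extend(records[cut:])
    return train, test
```

Note that this split leaves no held-out validation portion, which is exactly the gap the reproducibility row flags.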
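The Experiment Setup row mentions a logit adjustment weight τ, with τ = 1.2 reported as best on both datasets. A generic logit adjustment in the style of Menon et al. (subtracting τ·log of the class prior from each logit) can be sketched as below; the paper's actual long-tail adjustment module may differ in detail, so treat this as an illustration of the technique, not the authors' implementation.

```python
import math

def adjust_logits(logits, class_priors, tau=1.2):
    """Generic logit adjustment for class imbalance.

    Subtracting tau * log(prior_c) penalizes head (frequent) classes
    more than tail (rare) classes, so rare POIs are not drowned out.
    tau = 1.2 matches the value the paper reports as best.
    """
    return [z - tau * math.log(p) for z, p in zip(logits, class_priors)]
```

With equal raw logits, the rarer class ends up with the higher adjusted logit, which is the intended rebalancing effect.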