Identifying Human Mobility via Trajectory Embeddings
Authors: Qiang Gao, Fan Zhou, Kunpeng Zhang, Goce Trajcevski, Xucheng Luo, Fengli Zhang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on real-world datasets demonstrate that TULER achieves better accuracy than the existing methods. We now present our experiments, comparing TULER with several baseline methods on two public datasets. |
| Researcher Affiliation | Academia | University of Electronic Science and Technology of China, Chengdu, China; University of Maryland, College Park; Northwestern University, Evanston |
| Pseudocode | No | The paper describes the LSTM and GRU models using mathematical equations but does not present any structured pseudocode or algorithm blocks (a hedged model sketch is given after this table). |
| Open Source Code | Yes | Source code, datasets and implementation details are available online at https://github.com/gcooq/TUL. |
| Open Datasets | Yes | To show the performance of TULER and the comparison with some existing methods, we conduct our experiments on two publicly available LBSN datasets: Gowalla and Brightkite [Cho et al., 2011]. |
| Dataset Splits | No | Table 1 shows 'S/T: the number of trajectories for training and testing' (e.g., Gowalla 17,654/2,063), but does not explicitly mention a validation split or specific percentages for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions general software components like RNN variants (LSTM, GRU) but does not provide specific version numbers for programming languages, libraries, or other software dependencies used in the experiments. |
| Experiment Setup | Yes | Table 2 (Parameters used in TULER and baselines): Dimensionality 250 (possible choices 100-300); Hidden size 300 (250-1000); Learning rate 0.00095 (0.00085-0.1); Dropout rate 0.5 (0-1); Stacked TULER layers 2 (fixed at 2). A hedged training-setup sketch using these values follows this table. |
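
The Pseudocode row notes that TULER's recurrent models are specified only as equations. As a reading aid, here is a minimal sketch of the architecture the paper describes: check-in locations are embedded, passed through a stacked LSTM, and the final hidden state is mapped to logits over candidate users. This is written in PyTorch rather than the authors' released code; the class name `TULERSketch`, the argument names, and the default sizes (taken from Table 2) are our illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TULERSketch(nn.Module):
    """Hedged sketch of a TULER-style trajectory-user-linking model:
    location embedding -> stacked LSTM -> logits over candidate users."""

    def __init__(self, num_locations, num_users,
                 embed_dim=250, hidden_size=300,
                 num_layers=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(num_locations, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_size,
                           num_layers=num_layers,
                           dropout=dropout, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_users)

    def forward(self, trajectory):
        # trajectory: (batch, seq_len) tensor of location ids
        x = self.embed(trajectory)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(x)           # h_n: (num_layers, batch, hidden_size)
        return self.classifier(self.dropout(h_n[-1]))  # (batch, num_users) logits
```

A forward pass over a batch of padded location-id sequences yields one logit per candidate user, so trajectory-user linking reduces to a multi-class classification over users.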
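
Building on the `TULERSketch` class above, the following sketch wires the Table 2 hyperparameters into a training step. The optimizer (Adam), the cross-entropy loss, and the placeholder location/user counts are assumptions for illustration; the excerpt in the table does not specify them.

```python
import torch

# Hyperparameters taken from Table 2 of the paper; comments note the
# reported possible choices. Optimizer and loss are assumptions.
config = {
    "embedding_dim": 250,       # chosen from 100-300
    "hidden_size": 300,         # chosen from 250-1000
    "learning_rate": 0.00095,   # chosen from 0.00085-0.1
    "dropout": 0.5,             # chosen from 0-1
    "num_stacked_layers": 2,    # stacked TULER
}

# Placeholder vocabulary and user counts; replace with the actual
# dataset statistics when reproducing the experiments.
model = TULERSketch(num_locations=10_000, num_users=200,
                    embed_dim=config["embedding_dim"],
                    hidden_size=config["hidden_size"],
                    num_layers=config["num_stacked_layers"],
                    dropout=config["dropout"])
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
criterion = torch.nn.CrossEntropyLoss()

def train_step(batch_trajectories, batch_user_ids):
    """One optimization step over a batch of trajectories and user labels."""
    optimizer.zero_grad()
    logits = model(batch_trajectories)
    loss = criterion(logits, batch_user_ids)
    loss.backward()
    optimizer.step()
    return loss.item()
```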