How Do We Move: Modeling Human Movement with System Dynamics
Authors: Hua Wei, Dongkuan Xu, Junjie Liang, Zhenhui (Jessie) Li (pp. 4445-4452)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In extensive experiments on real-world datasets, we demonstrate that the proposed method can generate trajectories similar to real-world ones, and outperform the state-of-the-art methods in predicting the next location and generating long-term future trajectories. |
| Researcher Affiliation | Academia | Hua Wei, Dongkuan Xu, Junjie Liang, Zhenhui (Jessie) Li College of Information Sciences and Technology, The Pennsylvania State University {hzw77, dux19, jul672, jessieli}@ist.psu.edu |
| Pseudocode | No | The paper describes the model components and training process in text and equations but does not include a formal pseudocode or algorithm block. |
| Open Source Code | No | The paper mentions 'all the parameter settings can be found in our codes' but does not provide a public link to the source code or an explicit statement of its release. |
| Open Datasets | Yes | We evaluate our method in two real-world travel datasets: the travel behavior data in a theme park and the travel behavior of vehicles in a road network. The state and action definitions in each environment are shown in Table 1. Theme Park. This is an open-access dataset (footnote 1) that contains the tracking information for all visitors to a theme park, Dino Fun World, as is shown in Figure 3 (a). The footnote 1 points to http://vacommunity.org/VAST+Challenge+2015. |
| Dataset Splits | No | The paper refers to 'training data' and 'evaluation' but does not specify explicit percentages or sample counts for training, validation, and test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using models like LSTM and MLP and frameworks like GAIL and GAN, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Experimental Settings We specify some of the important parameters here and all the parameter settings can be found in our codes. In all the following experiments, if not specified, the observed time length Lin is set to be 10. The output length of Lout is 1 for the next location prediction task, and 1000 for trajectory generation task. We fix the length Lin and Lout for simplicity, but our methods can be easily extended to different lengths since the neural networks are recurrent in taking the trajectories as input and in predicting future trajectories. We sample the trajectories at every second for Theme Park, and at every 10 seconds for Route City. η is set as 0.8. |
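The settings quoted in the Experiment Setup row can be collected into a small configuration sketch. This is illustrative only: the parameter names and the lookup helper are hypothetical, and only the numeric values come from the paper.

```python
# Hypothetical experiment configuration assembled from the paper's reported
# settings; key names are illustrative, values are as stated in the paper.
EXPERIMENT_CONFIG = {
    "L_in": 10,                  # observed time length (input steps)
    "L_out": {
        "next_location": 1,      # output length for next-location prediction
        "generation": 1000,      # output length for trajectory generation
    },
    "sampling_interval_sec": {
        "theme_park": 1,         # trajectories sampled every second
        "route_city": 10,        # trajectories sampled every 10 seconds
    },
    "eta": 0.8,                  # the paper's η parameter
}

def output_length(task: str) -> int:
    """Look up the output horizon L_out for a given task (helper is hypothetical)."""
    return EXPERIMENT_CONFIG["L_out"][task]
```

For example, `output_length("generation")` returns the 1000-step horizon used for long-term trajectory generation, while `output_length("next_location")` returns 1.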