Generic and Dynamic Graph Representation Learning for Crowd Flow Modeling
Authors: Liangzhe Han, Ruixing Zhang, Leilei Sun, Bowen Du, Yanjie Fu, Tongyu Zhu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments have been conducted on two real-world datasets for four popular prediction tasks in crowd flow modeling. The results demonstrate that the proposed method achieves better prediction performance than baseline methods on all four tasks. |
| Researcher Affiliation | Academia | (1) State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; (2) Department of Computer Science, University of Central Florida, Florida, USA |
| Pseudocode | No | The paper describes methods using text and mathematical equations, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/liangzhehan/GDCF. |
| Open Datasets | Yes | The experiments are conducted on two real-world datasets: BJSubway, which contains IC card transaction records from the Beijing Subway from June to July 2017, and NYTaxi, which contains taxi orders in Manhattan from January to June 2019. More details about these datasets are listed in Table 1. The NYTaxi data is available at https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page |
| Dataset Splits | Yes | Train/validation/test splits in days: BJSubway 42/7/7; NYTaxi 139/21/21 (see the split sketch after this table). |
| Hardware Specification | Yes | The proposed method is implemented with the PyTorch toolbox on a machine with 4 Tesla T4 GPUs. |
| Software Dependencies | No | The paper states 'The proposed method is implemented with the Pytorch toolbox' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The Adam optimizer with an initial learning rate of 0.0001 and an early-stopping strategy with patience 20 is used to train the proposed model in phase 1; in phase 2, the initial learning rate is 0.001 and the patience is set to 50. The learning rate of all deep learning methods is chosen from {0.01, 0.001, 0.0001, 0.00001} according to the best performance on the validation set (see the training-loop sketch after this table). |
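
As a concrete reading of the dataset-splits row, here is a minimal sketch of slicing day-indexed data into the reported contiguous train/validation/test portions. The `split_by_days` helper and the tensor shape are illustrative assumptions; only the day counts come from the paper.

```python
import torch

def split_by_days(data: torch.Tensor, train_days: int, val_days: int):
    """Slice a tensor shaped (num_days, ...) into contiguous train/val/test parts."""
    train = data[:train_days]
    val = data[train_days:train_days + val_days]
    test = data[train_days + val_days:]
    return train, val, test

# BJSubway: 56 days in total, split 42/7/7 as reported.
flows = torch.randn(56, 300, 2)  # (days, stations, in/out flow) -- shape is illustrative
train, val, test = split_by_days(flows, train_days=42, val_days=7)
assert train.shape[0] == 42 and val.shape[0] == 7 and test.shape[0] == 7
```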
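
The two-phase schedule in the experiment-setup row can likewise be sketched as a generic early-stopped PyTorch loop. The `run_phase` helper, the placeholder linear model, and the synthetic batches are assumptions for illustration; only the learning rates and patience values are those reported by the paper.

```python
import torch
import torch.nn as nn

def run_phase(model: nn.Module, lr: float, patience: int, max_epochs: int = 1000):
    """Train with Adam at the given learning rate, stopping once the
    validation loss fails to improve for `patience` consecutive epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, stale = float("inf"), 0
    for _ in range(max_epochs):
        x = torch.randn(32, 10)                     # synthetic batch, illustrative only
        loss = nn.functional.mse_loss(model(x), x)  # stands in for the real objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():                       # validation pass on synthetic data
            val = nn.functional.mse_loss(model(torch.randn(32, 10)),
                                         torch.randn(32, 10)).item()
        if val < best_val:
            best_val, stale = val, 0
        else:
            stale += 1
            if stale >= patience:
                break

model = nn.Linear(10, 10)                # placeholder for the GDCF model in the authors' repo
run_phase(model, lr=1e-4, patience=20)   # phase 1
run_phase(model, lr=1e-3, patience=50)   # phase 2
```

The reported learning-rate search over {0.01, 0.001, 0.0001, 0.00001} would simply wrap `run_phase` in a loop that keeps the value with the best validation loss.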