Hierarchical Graph Convolution Network for Traffic Forecasting
Authors: Kan Guo, Yongli Hu, Yanfeng Sun, Sean Qian, Junbin Gao, Baocai Yin
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed method is evaluated on two complex city traffic speed datasets. Compared to the latest GCN based methods like Graph Wave Net, the proposed HGCN gets higher traffic forecasting precision with lower computational cost. The website of the code is https://github.com/guokan987/HGCN.git. |
| Researcher Affiliation | Academia | 1 Faculty of Information Technology, Beijing University of Technology, Beijing; 2 Civil and Environmental Engineering and H. John Heinz III College, Carnegie Mellon University, Pittsburgh; 3 Business School, The University of Sydney, Australia; 4 Peng Cheng Laboratory, Shenzhen, 518055, China |
| Pseudocode | Yes | Algorithm 1 Spatial-Temporal Block; Algorithm 2 The HGCN algorithm for traffic forecasting. |
| Open Source Code | Yes | The website of the code is https://github.com/guokan987/HGCN.git. |
| Open Datasets | Yes | Two traffic speed datasets used in our experiments are collected by Didi Chuxing GAIA Initiative (https://gaia.didichuxing.com) in Ji Nan and Xi An cities in China, as shown in Figure 3. |
| Dataset Splits | Yes | Each dataset is split into 60% for training, 20% for validation and 20% for test in chronological order. |
| Hardware Specification | Yes | The proposed model is implemented in PyTorch 1.2.0 on a virtual workstation with an Nvidia RTX 2080Ti (11 GB memory). |
| Software Dependencies | Yes | The proposed model is implemented in PyTorch 1.2.0 on a virtual workstation with an Nvidia RTX 2080Ti (11 GB memory). |
| Experiment Setup | Yes | In our experiments, we set the GCN order M = 3 in the S-T Block according to previous works (N.Kipf and Welling 2017; Wu et al. 2019). The size of the temporal kernel is ts = 3 in S-T Block 1 and S-T Block 3, and ts = 2 in S-T Block 2 and S-T Block 4. The forecasting time interval and input time interval are equal, i.e. T1 = 12, T2 = 12, thus tw = 3. The feature sizes are D = 1, D1 = 32, D2 = 256, D3 = 512. The size of the initialized node embedding in A_adp is 10, i.e. E = 10. To select the optimal setting for the number of regions NR, we let NR = 20 for the Ji Nan dataset and NR = 40 for the Xi An dataset according to the results on the validation set, shown in Figure 4. The batch size is 64. Adam optimization is used with an initial learning rate of 0.001. We train for 50 epochs. |
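The chronological 60/20/20 split reported above can be sketched as follows; `chronological_split` is a hypothetical helper (not from the paper's code), illustrating a split that preserves temporal order rather than shuffling:

```python
import numpy as np

def chronological_split(data, train_frac=0.6, val_frac=0.2):
    """Split a time-ordered array into train/val/test without shuffling,
    matching the reported 60%/20%/20% chronological split."""
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]

# Example: 100 time steps -> 60 train, 20 validation, 20 test
series = np.arange(100)
train, val, test = chronological_split(series)
print(len(train), len(val), len(test))  # 60 20 20
```

Splitting in chronological order (rather than randomly) avoids leaking future traffic states into the training set, which matters for forecasting benchmarks.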
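The hyperparameters in the Experiment Setup row can be collected into a single configuration sketch. The dictionary keys below are hypothetical names chosen for illustration; only the values are taken from the reported setup:

```python
# Sketch of the reported HGCN training configuration.
# Key names are illustrative; values are as reported in the paper.
hgcn_config = {
    "gcn_order": 3,                      # M = 3
    "temporal_kernel": {"block1": 3, "block2": 2, "block3": 3, "block4": 2},
    "T_in": 12,                          # input time interval T1
    "T_out": 12,                         # forecasting time interval T2
    "tw": 3,
    "feature_dims": {"D": 1, "D1": 32, "D2": 256, "D3": 512},
    "node_embedding_dim": 10,            # E = 10, for adaptive adjacency A_adp
    "num_regions": {"JiNan": 20, "XiAn": 40},  # NR per dataset (via validation set)
    "batch_size": 64,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "epochs": 50,
}

print(hgcn_config["num_regions"]["JiNan"])  # 20
```

A plain-dictionary config like this makes it easy to diff a reproduction attempt against the paper's reported settings.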