GMAN: A Graph Multi-Attention Network for Traffic Prediction
Authors: Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, Jianzhong Qi
AAAI 2020, pp. 1234-1241
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world traffic prediction tasks (i.e., traffic volume prediction and traffic speed prediction) demonstrate the superiority of GMAN. |
| Researcher Affiliation | Academia | (1) Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, Xiamen, China; (2) Digital Fujian Institute of Urban Traffic Big Data Research, Xiamen University, Xiamen, China; (3) School of Informatics, Xiamen University, Xiamen, China; (4) School of Computing and Information Systems, University of Melbourne, Melbourne, Australia |
| Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | The source code is available at https://github.com/zhengchuanpan/GMAN. |
| Open Datasets | Yes | (1) traffic volume prediction on the Xiamen dataset (Wang et al. 2017), which contains 5 months of data recorded by 95 traffic sensors from August 1st, 2015 to December 31st, 2015 in Xiamen, China; (2) traffic speed prediction on the PeMS dataset (Li et al. 2018b), which contains 6 months of data recorded by 325 traffic sensors from January 1st, 2017 to June 30th, 2017 in the Bay Area. |
| Dataset Splits | Yes | We use 70% of the data for training, 10% for validation, and 20% for testing. |
| Hardware Specification | No | The paper mentions running experiments and computation time, but does not specify any hardware details like GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions the Adam optimizer and ReLU activation function, but does not provide specific version numbers for software dependencies or libraries used for implementation (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | We use P = 12 historical time steps (1 hour) to predict the traffic conditions of the next Q = 12 steps (1 hour). We train our model using the Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 0.001. In the group spatial attention, we partition the vertices into G = 19 groups in the Xiamen dataset and G = 37 groups in the PeMS dataset, respectively. The number of traffic conditions on both datasets is C = 1. In total, there are 3 hyperparameters in our model, i.e., the number of ST-Attention blocks L, the number of attention heads K, and the dimensionality d of each attention head (the channel of each layer D = K × d). We tune these parameters on the validation set, and observe the best performance on the setting L = 3, K = 8, and d = 8 (D = 64). |
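
The split and hyperparameters reported in the table are stated in prose only; the following is a minimal sketch, not the authors' implementation, of how the 70%/10%/20% split and the P = 12 → Q = 12 windowing could be reproduced. The function names (`chronological_split`, `make_samples`) and the placeholder data are illustrative assumptions; the official code is at https://github.com/zhengchuanpan/GMAN.

```python
import numpy as np

# Hyperparameters as reported in the paper.
P, Q = 12, 12          # historical / future time steps (1 hour each)
L, K, d = 3, 8, 8      # ST-Attention blocks, attention heads, dimension per head
D = K * d              # channel size of each layer (64)
learning_rate = 1e-3   # initial learning rate for the Adam optimizer

def chronological_split(data, train_ratio=0.7, val_ratio=0.1):
    """Split a (time_steps, num_sensors) array into train/val/test along time."""
    n = data.shape[0]
    n_train, n_val = int(n * train_ratio), int(n * val_ratio)
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]

def make_samples(series, p=P, q=Q):
    """Build (past p steps, next q steps) input/target pairs with a sliding window."""
    x, y = [], []
    for t in range(series.shape[0] - p - q + 1):
        x.append(series[t:t + p])
        y.append(series[t + p:t + p + q])
    return np.stack(x), np.stack(y)

# Small placeholder readings for 325 sensors (the PeMS sensor count); not the real data.
readings = np.random.rand(2_000, 325).astype(np.float32)
train, val, test = chronological_split(readings)
x_train, y_train = make_samples(train)
print(x_train.shape, y_train.shape)   # (n_samples, 12, 325) for both
```

The sketch splits the series chronologically, which is the usual convention for traffic forecasting benchmarks; the paper states only the 70/10/20 ratios, not the split ordering.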