Multi-Range Attentive Bicomponent Graph Convolutional Network for Traffic Forecasting
Authors: Weiqi Chen, Ling Chen, Yu Xie, Wei Cao, Yusong Gao, Xiaojie Feng
AAAI 2020, pp. 3529–3536 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world road network traffic datasets, METR-LA and PEMS-BAY, show that our MRA-BGCN achieves the state-of-the-art results. |
| Researcher Affiliation | Collaboration | 1 College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China 2 Alibaba Group, Hangzhou 311121, China |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or explicit code release statement) for its source code. |
| Open Datasets | Yes | We evaluate MRA-BGCN on two public traffic network datasets, METR-LA and PEMS-BAY (Li et al., 2018). |
| Dataset Splits | Yes | Both the datasets are split in chronological order with 70% for training, 10% for validation, and 20% for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not provide specific version numbers for any software dependencies (e.g., programming languages or libraries like Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | In experiments, the number of the BGCGRU layers is set to 2, with 64 hidden units. The maximum hop k of the bicomponent graph convolution is set to 3. We train our model by using Adam optimizer (Kingma and Ba 2014) to minimize the mean absolute error (MAE) for 100 epochs with a batch size of 64. The initial learning rate is 1e-2 with a decay rate of 0.6 per 10 epochs. In addition, the scheduled sampling (Bengio et al., 2015) and L2 normalization with a weight decay of 2e-4 are applied for better generalization. |
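The dataset split and training hyperparameters reported above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' code (the paper releases none); the helper names `chronological_split` and `learning_rate` are hypothetical, and the config dictionary simply collects the values quoted in the table.

```python
# Hypothetical helpers illustrating the reported setup; not the authors' code.

def chronological_split(n_samples, train=0.7, val=0.1):
    """Split sample indices in chronological order: 70% train, 10% val,
    remaining 20% test, as reported in the paper."""
    n_train = int(n_samples * train)
    n_val = int(n_samples * val)
    return (list(range(0, n_train)),
            list(range(n_train, n_train + n_val)),
            list(range(n_train + n_val, n_samples)))

def learning_rate(epoch, base_lr=1e-2, decay=0.6, step=10):
    """Initial learning rate 1e-2, multiplied by 0.6 every 10 epochs."""
    return base_lr * decay ** (epoch // step)

# Hyperparameters quoted from the paper's experiment setup.
CONFIG = {
    "bgcgru_layers": 2,
    "hidden_units": 64,
    "max_hop_k": 3,
    "optimizer": "Adam",
    "loss": "MAE",
    "epochs": 100,
    "batch_size": 64,
    "weight_decay": 2e-4,  # L2 term reported as 2e-4
}
```

For example, `chronological_split(100)` yields index ranges 0–69, 70–79, and 80–99, and `learning_rate(10)` returns 1e-2 × 0.6 = 6e-3.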