GSTNet: Global Spatial-Temporal Network for Traffic Flow Prediction
Authors: Shen Fang, Qi Zhang, Gaofeng Meng, Shiming Xiang, Chunhong Pan
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on the real world datasets verify the effectiveness and superiority of the proposed method on both the public transportation network and the road network." (Abstract); see also Section 4, Experiments. |
| Researcher Affiliation | Academia | (1) NLPR, Institute of Automation, Chinese Academy of Sciences; (2) School of Artificial Intelligence, University of Chinese Academy of Sciences. Emails: {shen.fang, qi.zhang2015, gfmeng, smxiang, chpan}@nlpr.ia.ac.cn |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | To further verify the generalization of the proposed model, different methods are also compared on a public dataset: the PeMS-BAY Dataset [Li et al., 2018] for traffic speed prediction. |
| Dataset Splits | No | The paper mentions using historical data and test samples, but does not provide specific details on train/validation/test dataset splits, such as percentages, sample counts, or methodologies. |
| Hardware Specification | No | The paper mentions 'affordable computing resources' and 'same computing resources' but provides no specific details regarding the type of hardware used for experiments, such as GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using specific algorithms like 'Switchable Normalization' and the 'Adam algorithm', but does not provide specific software dependency details with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | Network Structure and Learning Strategy. The model contains two layers of spatial-temporal blocks. The temporal module consists of three convolution layers; the convolution kernel length is three in each layer, and each layer has 8 output channels. The concatenated 24 output channels are then reduced to 8 channels. The hidden and output channels in the spatial module are set to 8, and the graph convolution kernel length is three. The embedded Gaussian kernel is the default option, and the hyperparameter β is set to 2. Leaky ReLU is the non-linear activation function, and Switchable Normalization [Luo et al., 2018] is the normalization method. The optimizer is the Adam algorithm [Kingma and Ba, 2014] with learning rate α = 1e-3 (see the configuration sketch below). |
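The paper does not release code, so the following is only a minimal sketch of how the reported setup could be wired together in PyTorch. All class and variable names (`TemporalModule`, `SpatialModule`, `STBlock`, `GSTNetSketch`) are hypothetical; BatchNorm stands in for the paper's Switchable Normalization, and the non-local attention term (embedded Gaussian kernel, β = 2) is omitted. Only the hyperparameters quoted above are taken from the paper: two spatial-temporal blocks, three temporal convolutions of kernel length 3 with 8 channels (24 concatenated channels reduced to 8), a length-3 graph convolution kernel with 8 hidden/output channels, Leaky ReLU, and Adam with learning rate 1e-3.

```python
# Hypothetical configuration sketch of the reported GSTNet setup (not the authors' code).
import torch
import torch.nn as nn


class TemporalModule(nn.Module):
    """Three temporal convolutions (kernel length 3, 8 channels each).
    Their outputs are concatenated (3 x 8 = 24 channels) and reduced back to 8."""
    def __init__(self, in_channels=1, channels=8):
        super().__init__()
        self.convs = nn.ModuleList()
        c = in_channels
        for _ in range(3):
            self.convs.append(nn.Conv1d(c, channels, kernel_size=3, padding=1))
            c = channels
        self.reduce = nn.Conv1d(3 * channels, channels, kernel_size=1)
        self.act = nn.LeakyReLU()

    def forward(self, x):                       # x: (batch * nodes, in_channels, time)
        feats, h = [], x
        for conv in self.convs:
            h = self.act(conv(h))
            feats.append(h)
        return self.act(self.reduce(torch.cat(feats, dim=1)))


class SpatialModule(nn.Module):
    """Graph convolution with a length-3 polynomial kernel; 8 hidden/output channels.
    The paper's non-local (embedded Gaussian) attention branch is omitted here."""
    def __init__(self, channels=8, order=3):
        super().__init__()
        self.order = order
        self.theta = nn.Linear(order * channels, channels)
        self.act = nn.LeakyReLU()

    def forward(self, x, adj):                  # x: (batch, nodes, channels), adj: (nodes, nodes)
        feats, h = [x], x
        for _ in range(self.order - 1):
            h = torch.einsum('nm,bmc->bnc', adj, h)   # propagate features along the graph
            feats.append(h)
        return self.act(self.theta(torch.cat(feats, dim=-1)))


class STBlock(nn.Module):
    """One spatial-temporal block: temporal convolutions followed by graph convolution."""
    def __init__(self, in_channels, channels=8):
        super().__init__()
        self.temporal = TemporalModule(in_channels, channels)
        self.spatial = SpatialModule(channels)
        self.norm = nn.BatchNorm1d(channels)    # placeholder for Switchable Normalization

    def forward(self, x, adj):                  # x: (batch, nodes, in_channels, time)
        b, n, c, t = x.shape
        h = self.temporal(x.reshape(b * n, c, t))               # (b*n, 8, t)
        h = h.reshape(b, n, -1, t).permute(0, 3, 1, 2)          # (b, t, n, 8)
        h = self.spatial(h.reshape(b * t, n, -1), adj)          # (b*t, n, 8)
        h = h.reshape(b, t, n, -1).permute(0, 2, 3, 1)          # (b, n, 8, t)
        return self.norm(h.reshape(b * n, -1, t)).reshape(b, n, -1, t)


class GSTNetSketch(nn.Module):
    """Two stacked spatial-temporal blocks plus a 1x1 readout to one output channel."""
    def __init__(self, in_channels=1, channels=8):
        super().__init__()
        self.block1 = STBlock(in_channels, channels)
        self.block2 = STBlock(channels, channels)
        self.readout = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x, adj):
        b, n, _, t = x.shape
        h = self.block2(self.block1(x, adj), adj)
        return self.readout(h.reshape(b * n, -1, t)).reshape(b, n, 1, t)


# Training setup reported in the paper: Adam optimizer with learning rate 1e-3.
model = GSTNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

As a usage check, `model(torch.randn(4, 10, 1, 12), torch.eye(10))` would map a batch of 4 samples over 10 nodes and 12 time steps to predictions of the same temporal length; the actual input horizon, output horizon, and loss function are not specified in the quoted setup and would need to follow the paper's experiment section.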