LSGCN: Long Short-Term Traffic Prediction with Graph Convolutional Networks

Authors: Rongzhou Huang, Chuyin Huang, Yubao Liu, Genan Dai, Weiyang Kong

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments with three real-world traffic datasets verify the effectiveness of LSGCN. |
| Researcher Affiliation | Academia | ¹Sun Yat-Sen University, Guangzhou, China; ²Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China. liuyubao@mail.sysu.edu.cn, {huangrzh6, huangchy78, daign, kongwy3}@mail2.sysu.edu.cn |
| Pseudocode | No | The paper describes the model architecture and components but does not provide any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code for the methodology described. |
| Open Datasets | Yes | In the experiment, we use three real-world traffic datasets, namely PeMSD4, PeMSD7 and PeMSD8, which are collected by the California Performance Measurement System (PeMS) [Chen et al., 2001] and widely used in previous studies such as STGCN and ASTGCN [Yu et al., 2018; Guo et al., 2019]. |
| Dataset Splits | Yes | The first 47 days are used as the training set, and the remainder as the validation and test sets. |
| Hardware Specification | Yes | All experiments are performed on a Linux server (CPU: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz, GPU: GeForce RTX 2080 Ti). |
| Software Dependencies | No | The paper mentions using the RMSprop optimizer but does not specify software dependencies with version numbers (e.g., Python, TensorFlow, or PyTorch versions). |
| Experiment Setup | Yes | The batch sizes of PeMSD4, PeMSD7 and PeMSD8 are 32, 32 and 16, respectively. The hyperparameters are set as follows. For all datasets, the channels of the first GLU, GCN, cosAtt, and the second GLU are 32, 32, 32 and 64, respectively. Both the graph convolution kernel size K and the GLU convolution kernel size Kt are set to 3. The model is trained by minimizing the mean squared error with the RMSprop optimizer for 60 epochs. The initial learning rate is set to 10⁻³ with a decay rate of 0.7 after every 5 epochs. |
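The learning-rate schedule reported in the Experiment Setup row can be sketched as a small helper. This is an illustrative stand-in, not the authors' code; the function name, signature, and defaults are assumptions drawn only from the reported hyperparameters:

```python
def lsgcn_lr_at_epoch(epoch, base_lr=1e-3, decay=0.7, step=5):
    """Learning rate at a given (0-indexed) epoch under the reported schedule:
    an initial rate of 10^-3, decayed by a factor of 0.7 every 5 epochs.
    (Function name and signature are illustrative, not from the paper.)"""
    return base_lr * decay ** (epoch // step)

# Other reported settings, for reference:
#   optimizer: RMSprop; loss: mean squared error; 60 training epochs
#   batch size: 32 (PeMSD4, PeMSD7), 16 (PeMSD8)
#   channels: first GLU 32, GCN 32, cosAtt 32, second GLU 64
#   kernel sizes: graph convolution K = 3, GLU convolution Kt = 3
```

For example, `lsgcn_lr_at_epoch(10)` gives 1e-3 × 0.7² ≈ 4.9e-4, after two decay steps.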