Spatio-Temporal Self-Supervised Learning for Traffic Flow Prediction

Authors: Jiahao Ji, Jingyuan Wang, Chao Huang, Junjie Wu, Boren Xu, Zhenhe Wu, Junbo Zhang, Yu Zheng

AAAI 2023

Reproducibility variables, with the result and the supporting evidence (LLM response) for each:
Research Type: Experimental. Evidence: "Experiments on four benchmark datasets demonstrate that ST-SSL consistently outperforms various state-of-the-art baselines."
Researcher Affiliation: Collaboration. Evidence: (1) School of Computer Science & Engineering, Beihang University, China; (2) School of Economics & Management, Beihang University, China; (3) Department of Computer Science, Musketeers Foundation Institute of Data Science, University of Hong Kong, China; (4) JD Intelligent Cities Research, Beijing, China; (5) JD iCity, JD Technology, Beijing, China.
Pseudocode: No. Evidence: The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: Yes. Evidence: The model implementation is available at https://github.com/Echo-Ji/ST-SSL.
Open Datasets: Yes. Evidence: "We evaluate our model on two types of public real-world traffic datasets summarized in Tab. 1. The first kind is about bike rental records in New York City. NYCBike1 (Zhang, Zheng, and Qi 2017) spans from 04/01/2014 to 09/30/2014, and NYCBike2 (Yao et al. 2019) spans from 07/01/2016 to 08/29/2016. [...] NYCTaxi (Yao et al. 2019) [...] BJTaxi (Zhang, Zheng, and Qi 2017)..."
Dataset Splits: Yes. Evidence: "We use a sliding window strategy to generate samples, and then split each dataset into the training, validation, and test sets with a ratio of 7:1:2."
Hardware Specification: No. Evidence: The paper states that the model is implemented with PyTorch and that experiments are conducted on the LibCity platform, but no specific hardware details (e.g., GPU/CPU models, memory) are provided.
Software Dependencies: No. Evidence: The paper mentions implementation with PyTorch and evaluation on the LibCity platform, but it does not provide version numbers for these software components.
Experiment Setup: Yes. Evidence: "The embedding dimension D is set as 64. Both the temporal and spatial convolution kernel sizes of the ST encoder are set to 3. The perturbation ratios for both traffic-level and topology-level augmentations are set as 0.1. Training is performed using the Adam optimizer with a batch size of 32."
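The sliding-window sampling and 7:1:2 split quoted under "Dataset Splits" can be sketched as follows. This is a minimal illustration, not the paper's code: the window length, horizon, and function names are assumptions, while the chronological 7:1:2 ratio comes from the quoted text.

```python
# Sketch of sliding-window sample generation and a chronological 7:1:2 split.
# Window length and horizon below are illustrative assumptions.

def make_samples(series, window=4, horizon=1):
    """Slide a window over the series: each sample pairs `window`
    consecutive steps of input with the next `horizon` steps as target."""
    samples = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]
        y = series[i + window:i + window + horizon]
        samples.append((x, y))
    return samples

def split_7_1_2(samples):
    """Split samples chronologically into train/val/test at a 7:1:2 ratio."""
    n = len(samples)
    n_train = int(n * 0.7)
    n_val = int(n * 0.1)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

series = list(range(100))        # stand-in for one node's traffic time series
samples = make_samples(series)   # 96 (input, target) pairs
train, val, test = split_7_1_2(samples)
print(len(train), len(val), len(test))  # 67 9 20
```

Splitting chronologically (rather than shuffling first) keeps the test period strictly after the training period, which is the usual convention for traffic forecasting benchmarks.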
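The "Experiment Setup" row mentions traffic-level and topology-level augmentations with a perturbation ratio of 0.1. A rough sketch of what such augmentations could look like is below; only the two augmentation levels and the 0.1 ratio come from the paper's setup description, and the concrete perturbation operators (zeroing flow entries, dropping edges) are assumptions for illustration.

```python
import random

RATIO = 0.1  # perturbation ratio for both augmentation levels (from the paper)

def traffic_level_augment(flows, ratio=RATIO, rng=random):
    """Zero out a random `ratio` fraction of traffic-flow entries
    (an assumed form of traffic-level perturbation)."""
    out = list(flows)
    k = int(len(out) * ratio)
    for idx in rng.sample(range(len(out)), k):
        out[idx] = 0.0
    return out

def topology_level_augment(edges, ratio=RATIO, rng=random):
    """Drop a random `ratio` fraction of edges from the spatial graph
    (an assumed form of topology-level perturbation)."""
    k = int(len(edges) * ratio)
    dropped = set(rng.sample(range(len(edges)), k))
    return [e for i, e in enumerate(edges) if i not in dropped]

rng = random.Random(0)
flows = [float(i) for i in range(20)]
edges = [(i, i + 1) for i in range(10)]
aug_flows = traffic_level_augment(flows, rng=rng)   # 2 of 20 entries zeroed
aug_edges = topology_level_augment(edges, rng=rng)  # 9 of 10 edges kept
```

In a self-supervised setting, two such perturbed views of the same spatio-temporal input would typically be encoded and contrasted against each other; the sketch only covers the view-generation step.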