Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Graph-based Multi-ODE Neural Networks for Spatio-Temporal Traffic Forecasting

Authors: Zibo Liu, Parshin Shojaee, Chandan K. Reddy

TMLR 2023 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive set of experiments conducted on six real-world datasets demonstrate the superior performance of GRAM-ODE compared with state-of-the-art baselines as well as the contribution of different components to the overall performance. We conduct experiments on six real-world datasets and seven baseline models to evaluate the effectiveness of our proposed GRAM-ODE and its components for the traffic forecasting task. We also conducted experiments using three distinct random seeds across various cross-validation splits on the PEMS-BAY dataset. |
| Researcher Affiliation | Academia | Zibo Liu EMAIL Parshin Shojaee EMAIL Chandan K. Reddy EMAIL Department of Computer Science, Virginia Tech, Arlington, VA |
| Pseudocode | Yes | Algorithm 1 provides the pseudocode for the GRAM-ODE layer by sequentially passing the input via TCN, Multi ODE-GNN, and another TCN blocks. Algorithm 2 provides the complete pseudocode of GRAM-ODE training. |
| Open Source Code | Yes | The code is available at https://github.com/zbliu98/GRAM-ODE |
| Open Datasets | Yes | We show the performance results of our model on six widely used public benchmark traffic datasets: PEMS03, PEMS04, PEMS07, and PEMS08 released by (Song et al., 2020) as well as PEMS-BAY (Li et al., 2017) and METR-LA (Jagadish et al., 2014). Datasets are downloaded from the STSGCN GitHub repository https://github.com/Davidham3/STSGCN/ |
| Dataset Splits | Yes | Following the previous works in this domain, we perform experiments by splitting the entire dataset into 6:2:2 for train, validation, and test sets. This split follows a temporal order, using the first 60% of the time length for training, and the subsequent 20% each for validation and testing. |
| Hardware Specification | Yes | All experiments are implemented using PyTorch (Paszke et al., 2019) and trained using a Quadro RTX 8000 GPU with 48GB of RAM. |
| Software Dependencies | No | All experiments are implemented using PyTorch (Paszke et al., 2019) and trained using a Quadro RTX 8000 GPU with 48GB of RAM. |
| Experiment Setup | Yes | DTW threshold (ϵ) in Eq. (2) is 0.1; number of channels (C) in the historical data is 3 (i.e., flow, speed, and occupation) and in the embedding space is 64. The shared temporal weights W_s1, W_s2 ∈ ℝ^{12×12} are initialized randomly from a normal distribution. The length of latent space for the input of local ODE block is L = 4, and in the final attention module, the number of attention heads is h = 12. During training, we use learning rates of 10^-4, 10^-4, 10^-5, 10^-5, 10^-4, and 10^-4 for PEMS03, PEMS04, PEMS07, PEMS08, PEMS-BAY, and METR-LA datasets, respectively. The optimizer is AdamW. |
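The 6:2:2 temporal split quoted under Dataset Splits can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code; `temporal_split` is a hypothetical helper name.

```python
# Sketch of the temporal 6:2:2 split described in the report:
# the first 60% of timesteps go to training, the next 20% to
# validation, and the final 20% to testing (no shuffling,
# since the split follows temporal order).

def temporal_split(num_timesteps, ratios=(0.6, 0.2, 0.2)):
    """Return (train, val, test) timestep index ranges in temporal order."""
    train_end = int(num_timesteps * ratios[0])
    val_end = train_end + int(num_timesteps * ratios[1])
    return (
        range(0, train_end),            # oldest 60%
        range(train_end, val_end),      # next 20%
        range(val_end, num_timesteps),  # most recent 20%
    )

train_idx, val_idx, test_idx = temporal_split(100)
```

Keeping the test set strictly after the validation set in time avoids leaking future traffic patterns into model selection.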