Graph Neural Controlled Differential Equations for Traffic Forecasting

Authors: Jeongwhan Choi, Hwangyong Choi, Jeehyun Hwang, Noseong Park (pp. 6367-6374)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct experiments with 6 benchmark datasets and 20 baselines. STG-NCDE shows the best accuracy in all cases, outperforming all those 20 baselines by non-trivial margins."
Researcher Affiliation | Academia | Yonsei University, Seoul, South Korea. {jeongwhan.choi, hwangyong753, hwanggh96, noseong}@yonsei.ac.kr
Pseudocode | No | The paper describes the proposed method using text and mathematical equations, but it does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code and no direct link to a code repository for the described methodology. It refers to the full paper (Choi et al. 2021) for reproducibility, but no concrete means of access is given in the provided text.
Open Datasets | Yes | "In the experiment, we use six real-world traffic datasets, namely PeMSD7(M), PeMSD7(L), PeMS03, PeMS04, PeMS07, and PeMS08, which were collected by California Performance of Transportation (PeMS) (Chen et al. 2001) in real time every 30 seconds and widely used in the previous studies (Yu, Yin, and Zhu 2018; Guo et al. 2019; Fang et al. 2021; Chen, Segovia-Dominguez, and Gel 2021; Song et al. 2020)."
Dataset Splits | Yes | "The datasets are already split with a ratio of 6:2:2 into training, validating, and testing sets."
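The datasets ship pre-split, but the reported 6:2:2 ratio for time-ordered data can be sketched as follows (the function name and use of NumPy are illustrative assumptions, not taken from the authors' code):

```python
import numpy as np

def split_6_2_2(data: np.ndarray):
    """Split a time-ordered array 6:2:2 into train/val/test subsets.

    Chronological slicing (no shuffling) mirrors the standard practice
    for traffic-forecasting benchmarks such as the PeMS datasets.
    """
    n = len(data)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test
```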
Hardware Specification | Yes | "Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.9.5, NUMPY 1.20.3, SCIPY 1.7, MATPLOTLIB 3.3.1, TORCHDIFFEQ 0.2.2, PYTORCH 1.9.0, CUDA 11.4, and NVIDIA Driver 470.42, i9 CPU, and NVIDIA RTX A6000."
Software Dependencies | Yes | "Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.9.5, NUMPY 1.20.3, SCIPY 1.7, MATPLOTLIB 3.3.1, TORCHDIFFEQ 0.2.2, PYTORCH 1.9.0, CUDA 11.4, and NVIDIA Driver 470.42, i9 CPU, and NVIDIA RTX A6000."
Experiment Setup | Yes | "For our method, we test with the following hyperparameter configurations: we train for 200 epochs using the Adam optimizer, with a batch size of 64 on all datasets. The two dimensionalities of dim(h(v)) and dim(z(v)) are {32, 64, 128, 256}, the node embedding size C is from 1 to 10, and the number of K in Eq. (8) is in {1, 2, 3}. The learning rate in all methods is in {1×10^-2, 5×10^-3, 1×10^-3, 5×10^-4, 1×10^-4} and the weight decay coefficient is in {1×10^-4, 1×10^-3, 1×10^-2}. An early stop strategy with a patience of 15 iterations on the validation dataset is used."
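As a rough illustration, the reported search space can be enumerated as a Cartesian grid in Python (key names such as hidden_dim and cheb_k are illustrative assumptions, not identifiers from the authors' code):

```python
from itertools import product

# Hyperparameter ranges as quoted in the experiment setup above.
# Key names are illustrative; only the value sets come from the paper.
grid = {
    "hidden_dim": [32, 64, 128, 256],          # dim(h(v)) and dim(z(v))
    "node_embedding_dim": list(range(1, 11)),  # node embedding size C, 1..10
    "cheb_k": [1, 2, 3],                       # K in Eq. (8)
    "lr": [1e-2, 5e-3, 1e-3, 5e-4, 1e-4],      # learning rate
    "weight_decay": [1e-4, 1e-3, 1e-2],        # weight decay coefficient
}

# Enumerate every combination as a flat list of config dicts.
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
```

Enumerating the full grid yields 4 × 10 × 3 × 5 × 3 = 1800 candidate configurations; in practice such spaces are usually sampled rather than searched exhaustively.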