CrossGNN: Confronting Noisy Multivariate Time Series Via Cross Interaction Refinement

Authors: Qihe Huang, Lei Shen, Ruixin Zhang, Shouhong Ding, Binwu Wang, Zhengyang Zhou, Yang Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on 8 real-world MTS datasets demonstrate the effectiveness of CrossGNN compared with state-of-the-art methods."
Researcher Affiliation | Collaboration | University of Science and Technology of China (USTC), Hefei, China; Suzhou Institute for Advanced Research, USTC, Suzhou, China; Youtu Laboratory, Tencent, Shanghai, China
Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but does not include pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | "The code is available at https://github.com/hqh0728/CrossGNN."
Open Datasets | Yes | "We conduct extensive experiments on 8 real-world datasets following [27], including Weather, Traffic, Exchange Rate, Electricity and 4 ETT datasets (ETTh1, ETTh2, ETTm1 and ETTm2)." Table 4 (statistics of the datasets for MTS forecasting) covers ETTh1, ETTh2, ETTm1, ETTm2, Weather, Traffic, Exchange-rate, and Electricity.
Dataset Splits | Yes | "We follow the standard protocol in [27] and split datasets into training, validation and test sets by the ratio of 6:2:2 for the last 4 ETT datasets, and 7:1:2 for the other datasets." (A split sketch follows the table.)
Hardware Specification | Yes | "The model is implemented in PyTorch 1.8.0 and trained on a single NVIDIA Tesla V100 PCIe GPU with 16GB memory. Comparison experiments are implemented on an Intel(R) 8255C CPU @ 2.50GHz with 40GB memory, CentOS 7.8, and TVM 1.0.0."
Software Dependencies | Yes | "The model is implemented in PyTorch 1.8.0 and trained on a single NVIDIA Tesla V100 PCIe GPU with 16GB memory. Comparison experiments are implemented on an Intel(R) 8255C CPU @ 2.50GHz with 40GB memory, CentOS 7.8, and TVM 1.0.0."
Experiment Setup | Yes | "All the models follow the same experimental setup with prediction length T ∈ {96, 192, 336, 720} for all datasets as in the original papers. We set the number of scales S to 5 and K to 10 for all datasets, as sensitivity experiments have shown that S does not have a significant impact beyond 5 and CrossGNN is not sensitive to K. Additionally, the mean squared error (MSE) is used as the loss function. For the learning rate, a grid search is conducted among [5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5] to obtain the most suitable learning rate for all datasets. Besides, the dimension of the channel is set to 8 for smaller datasets and 16 for larger datasets, respectively. Training is terminated early if the validation loss does not decrease for three consecutive rounds." (A training-loop sketch follows the table.)
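
The chronological 6:2:2 and 7:1:2 splits quoted in the Dataset Splits row come down to a few lines of indexing. Below is a minimal sketch; the function name and array layout are illustrative assumptions, not taken from the released code:

```python
import numpy as np

def chrono_split(data: np.ndarray, ratios=(0.7, 0.1, 0.2)):
    """Chronologically split a (time, channels) array into train/val/test."""
    n = data.shape[0]
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return data[:train_end], data[train_end:val_end], data[val_end:]

# The 4 ETT datasets use a 6:2:2 ratio; Weather, Traffic, Exchange Rate,
# and Electricity use 7:1:2.
# train, val, test = chrono_split(data, ratios=(0.6, 0.2, 0.2))
```

Splitting chronologically (rather than shuffling) keeps the test set strictly later in time than the training set, which is the standard protocol for time-series forecasting benchmarks.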
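
The learning-rate grid search, MSE objective, and three-round early stopping described in the Experiment Setup row can be approximated as below. This is a sketch under the stated settings only; the optimizer choice (Adam), max_epochs, and all function names are assumptions not specified in the quoted text:

```python
import torch

LEARNING_RATES = [5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5]  # grid from the paper
PATIENCE = 3  # stop after 3 consecutive epochs without validation improvement

def train_one_config(model, train_loader, val_loader, lr, max_epochs=100):
    """Train with MSE loss and early stopping; return the best validation loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer is an assumption
    criterion = torch.nn.MSELoss()
    best_val, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= PATIENCE:
                break  # early stopping: 3 rounds without improvement
    return best_val

# Grid search: keep the learning rate with the lowest validation loss.
# make_model() is a hypothetical factory that builds a fresh model per run.
# best_lr = min(LEARNING_RATES,
#               key=lambda lr: train_one_config(make_model(), train_loader,
#                                               val_loader, lr))
```

Rebuilding the model inside the grid-search loop matters: reusing one partially trained model across learning rates would bias the comparison toward whichever rate runs first.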