FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective

Authors: Kun Yi, Qi Zhang, Wei Fan, Hui He, Liang Hu, Pengyang Wang, Ning An, Longbing Cao, Zhendong Niu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on seven datasets have demonstrated our superior performance with higher efficiency and fewer parameters compared with state-of-the-art methods.
Researcher Affiliation | Academia | Beijing Institute of Technology, Tongji University, University of Oxford, University of Macau, Hefei University of Technology, Macquarie University
Pseudocode | No | The paper describes methods through mathematical equations and architectural diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at this repository: https://github.com/aikunyi/FourierGNN.
Open Datasets | Yes | We use seven public benchmarks for multivariate time series forecasting; these benchmark datasets are summarized in Table 7 (which lists datasets such as ECG, Solar, Electricity, COVID-19, and METR-LA with URLs/citations in footnotes). ECG: this dataset contains Electrocardiogram (ECG) recordings from the UCR time-series classification archive [38].
Dataset Splits | Yes | Except for the COVID-19 dataset, we split the other datasets into training, validation, and test sets with a ratio of 7:2:1 in chronological order. For the COVID-19 dataset, the ratio is 6:2:2. (See the split sketch after this table.)
Hardware Specification | Yes | All experiments are conducted in Python using PyTorch 1.8 [37] (except for SFM [24], which uses Keras) and performed on a single NVIDIA RTX 3080 10GB GPU.
Software Dependencies | Yes | All experiments are conducted in Python using PyTorch 1.8 [37] (except for SFM [24], which uses Keras).
Experiment Setup | Yes | Our model is trained using RMSProp with a learning rate of 10^-5 and MSE (Mean Squared Error) as the loss function. ... Specifically, the embedding size and batch size are tuned over {32, 64, 128, 256, 512} and {2, 4, 8, 16, 32, 64, 128}, respectively. (See the training-setup sketch after this table.)
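
The chronological 7:2:1 split reported in the Dataset Splits row (6:2:2 for COVID-19) can be expressed as a short helper. This is a minimal sketch under those stated ratios, not the authors' code; the `chronological_split` helper and the toy series are hypothetical.

```python
import numpy as np

def chronological_split(data, train_ratio=0.7, val_ratio=0.2):
    """Chronologically split a (time, variables) array into train/val/test.

    Hypothetical helper illustrating the paper's 7:2:1 split
    (6:2:2 for COVID-19); not taken from the authors' repository.
    """
    n = len(data)
    train_end = int(n * train_ratio)
    val_end = train_end + int(n * val_ratio)
    return data[:train_end], data[train_end:val_end], data[val_end:]

# Toy example: 1000 time steps, 8 variables
series = np.random.randn(1000, 8)
train, val, test = chronological_split(series)        # 7:2:1
covid_style = chronological_split(series, 0.6, 0.2)   # 6:2:2
```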
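
The Experiment Setup row pins down the optimizer, learning rate, loss, and hyperparameter grids. Below is a minimal PyTorch sketch of that configuration; the FourierGNN model itself is not reproduced here, so a placeholder module stands in for it, and the training-step structure is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the FourierGNN model from the authors' repo
# (https://github.com/aikunyi/FourierGNN); not the actual architecture.
model = nn.Linear(12, 12)

# As stated in the paper: RMSProp, learning rate 1e-5, MSE loss
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-5)
criterion = nn.MSELoss()

# Hyperparameter grids reported in the paper
embedding_sizes = [32, 64, 128, 256, 512]
batch_sizes = [2, 4, 8, 16, 32, 64, 128]

def train_step(x, y):
    """One optimization step (loop structure is an assumption)."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```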