Taming Local Effects in Graph-based Spatiotemporal Forecasting

Authors: Andrea Cini, Ivan Marisca, Daniele Zambon, Cesare Alippi

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Supported by strong empirical evidence, we provide insights and guidelines for specializing graph-based models to the dynamics of each time series and show how this aspect plays a crucial role in obtaining accurate predictions." and "A comprehensive empirical analysis of the aforementioned phenomena in representative architectures across synthetic and real-world datasets."
Researcher Affiliation | Academia | Andrea Cini 1, Ivan Marisca 1, Daniele Zambon 1, Cesare Alippi 1,2; 1 The Swiss AI Lab IDSIA USI-SUPSI, Università della Svizzera italiana; 2 Politecnico di Milano; {andrea.cini, ivan.marisca, daniele.zambon, cesare.alippi}@usi.ch
Pseudocode | No | The paper describes mathematical formulations and architectural components using equations and textual descriptions, but does not include a dedicated pseudocode or algorithm block.
Open Source Code | Yes | "The code needed to reproduce the reported results is available online." Repository: https://github.com/Graph-Machine-Learning-Group/taming-local-effects-stgnns
Open Datasets | Yes | Traffic forecasting: "We consider two popular traffic forecasting datasets, namely METR-LA and PEMS-BAY [6], containing measurements from loop detectors in the Los Angeles County Highway and San Francisco Bay Area, respectively. For the experiment on transfer learning, we use the PEMS03, PEMS04, PEMS07, and PEMS08 datasets from Guo et al. [56], each collecting traffic flow readings..." Electric load forecasting: "We selected the CER-E dataset [54]..." Air quality monitoring: "The AQI [55] dataset collects hourly measurements..."
Dataset Splits | Yes | "For the GPVAR datasets we follow the procedure described in Sec. 7 to generate data and then partition the resulting time series in 70%/10%/20% splits for training, validation and testing, respectively." and "For all datasets except for AQI, we divide the obtained windows sequentially into 70%/10%/20% splits for training, validation, and testing, respectively." (A sketch of this sequential split follows the table.)
Hardware Specification | Yes | "Experiments were run on a workstation equipped with AMD EPYC 7513 processors and four NVIDIA RTX A5000 GPUs."
Software Dependencies | No | "Experimental setup and baselines have been developed with Python [57] by relying on the following open-source libraries: PyTorch [58]; PyTorch Lightning [59]; PyTorch Geometric [60]; Torch Spatiotemporal [61]; numpy [62]; scikit-learn [63]."
Experiment Setup | Yes | "For the experiment in Tab. 4, we set W = 12, H = 12 for the traffic datasets, W = 48, H = 6 for CER-E, and W = 24, H = 3 for AQI." [...] "The number of neurons in each layer is set to 64 and the embedding size to 32 for all the reference architectures in all the benchmark datasets." [...] "For GPVAR experiments we use a batch size of 128 and train with early stopping for a maximum of 200 epochs with the Adam optimizer [67] and a learning rate of 0.01 halved every 50 epochs." (A sketch of this training configuration follows the table.)
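
As a reading aid for the Dataset Splits row, the following is a minimal sketch of a sequential 70%/10%/20% split as quoted above; the `sequential_split` helper and the dummy `windows` array are illustrative assumptions, not the authors' code.

```python
# Hypothetical helper illustrating the sequential 70/10/20 split quoted above;
# the function name and the dummy data are assumptions, not the paper's code.
import numpy as np

def sequential_split(windows, train_frac=0.7, val_frac=0.1):
    """Split an ordered sequence of windows into train/val/test without shuffling."""
    n = len(windows)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return windows[:n_train], windows[n_train:n_train + n_val], windows[n_train + n_val:]

# Example with dummy data: 1000 windows -> 700 train, 100 val, 200 test.
train, val, test = sequential_split(np.arange(1000))
print(len(train), len(val), len(test))
```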
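
Similarly, for the Experiment Setup row, here is a minimal sketch of the quoted GPVAR optimization settings (Adam, learning rate 0.01 halved every 50 epochs, at most 200 epochs, batch size 128); the placeholder model and the omitted data loading and early stopping logic are assumptions, not the authors' implementation.

```python
# Sketch of the GPVAR optimization settings quoted above; the model and the
# training-loop body are placeholders, not the authors' implementation.
import torch

model = torch.nn.Linear(12, 12)  # placeholder for a graph-based forecasting model
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):  # early stopping (used in the paper) omitted for brevity
    # ... one training epoch over mini-batches of size 128 would go here ...
    scheduler.step()
```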