Scalable Spatiotemporal Graph Neural Networks
Authors: Andrea Cini, Ivan Marisca, Filippo Maria Bianchi, Cesare Alippi
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden. |
| Researcher Affiliation | Academia | The Swiss AI Lab IDSIA, Università della Svizzera italiana; UiT The Arctic University of Norway; NORCE Norwegian Research Centre; Politecnico di Milano |
| Pseudocode | No | The paper describes the methods using mathematical equations and prose but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | We provide an efficient open-source implementation of SGP together with the code to reproduce all the experiments: https://github.com/Graph-Machine-Learning-Group/sgp |
| Open Datasets | Yes | In the first experiment we consider the METR-LA and PEMS-BAY datasets (Li et al. 2018), which are popular medium-sized benchmarks... The first dataset contains data coming from the Irish Commission for Energy Regulation Smart Metering Project (CER-E; Commission for Energy Regulation 2016)... The second large-scale dataset is obtained from the synthetic PV-US dataset (Hummon et al. 2012): https://www.nrel.gov/grid/solar-power-data.html |
| Dataset Splits | Yes | We use the same preprocessing steps of previous works to extract a graph and obtain train, validation and test data splits (Wu et al. 2019). ...for both datasets, we consider the first 6 months of data (4 months for training, 1 month for validation, and 1 month for testing). See the split sketch after this table. |
| Hardware Specification | Yes | The time required to encode the datasets with SGP's encoder ranges from tens of seconds to 4 minutes on an AMD EPYC 7513 processor with 32 parallel processes. ...we measure the time required for the update step of each model on an NVIDIA RTX A5000 GPU... Nvidia Corporation for the donation of two GPUs. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). |
| Experiment Setup | Yes | In particular, each model is trained to predict the 12-step-ahead observations. ...for both datasets, we consider the first 6 months of data (4 months for training, 1 month for validation, and 1 month for testing). ...we fix a maximum GPU memory budget of 12 GB and select the batch size accordingly. See the windowing sketch after this table. |
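
The temporal split quoted in the Dataset Splits row (first 6 months: 4 train / 1 validation / 1 test) can be illustrated with a few lines of Python. This is a minimal sketch assuming a pandas `DatetimeIndex`-indexed dataframe; the function and variable names (`temporal_split`, `df`) are illustrative, not taken from the paper's repository.

```python
# Hypothetical sketch of the 6-month temporal split described above:
# 4 months train, 1 month validation, 1 month test.
import pandas as pd

def temporal_split(df: pd.DataFrame):
    """Split a datetime-indexed dataframe into contiguous train/val/test windows."""
    start = df.index.min()
    train_end = start + pd.DateOffset(months=4)   # months 1-4: training
    val_end = train_end + pd.DateOffset(months=1)  # month 5: validation
    test_end = val_end + pd.DateOffset(months=1)   # month 6: testing

    train = df[(df.index >= start) & (df.index < train_end)]
    val = df[(df.index >= train_end) & (df.index < val_end)]
    test = df[(df.index >= val_end) & (df.index < test_end)]
    return train, val, test
```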
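
Similarly, the 12-step-ahead task in the Experiment Setup row amounts to sliding-window target construction. The snippet below is a sketch under assumptions: the input window length of 12 and the names (`sliding_windows`, `series`) are hypothetical, while the 12-step horizon comes from the quoted setup.

```python
import numpy as np

def sliding_windows(series: np.ndarray, window: int = 12, horizon: int = 12):
    """Build (input, target) pairs where each target holds the `horizon`
    observations following a `window`-step input, matching the
    12-step-ahead prediction task described above.

    Note: window=12 is an assumed input length, not stated in the quote.
    """
    inputs, targets = [], []
    for t in range(len(series) - window - horizon + 1):
        inputs.append(series[t:t + window])
        targets.append(series[t + window:t + window + horizon])
    return np.stack(inputs), np.stack(targets)
```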