GraFITi: Graphs for Forecasting Irregularly Sampled Time Series
Authors: Vijaya Krishna Yalavarthi, Kiran Madhusudhanan, Randolf Scholz, Nourhan Ahmed, Johannes Burchert, Shayan Jawed, Stefan Born, Lars Schmidt-Thieme
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | GraFITi has been tested on 3 real-world and 1 synthetic irregularly sampled time series dataset with missing values and compared with various state-of-the-art models. The experimental results demonstrate that GraFITi improves the forecasting accuracy by up to 17% and reduces the run time by up to 5 times compared to the state-of-the-art forecasting models. |
| Researcher Affiliation | Academia | Vijaya Krishna Yalavarthi¹, Kiran Madhusudhanan¹, Randolf Scholz¹, Nourhan Ahmed¹, Johannes Burchert¹, Shayan Jawed¹, Stefan Born², Lars Schmidt-Thieme¹ — ¹ Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Germany; ² Institute of Mathematics, TU Berlin, Germany |
| Pseudocode | Yes | Algorithm 1: Graph Neural Network (gnn(l)); Algorithm 2: Forward pass of GraFITi |
| Open Source Code | Yes | Implementation code: https://github.com/yalavarthivk/GraFITi |
| Open Datasets | Yes | Physionet 12 (Silva et al. 2012) consists of ICU patient records observed for 48 hours. MIMIC-III (Johnson et al. 2016) is also a medical dataset that contains measurements of the ICU patients observed for 48 hours. MIMIC-IV (Johnson et al. 2021) is built upon the MIMIC-III database. USHCN (Menne, Williams Jr, and Vose 2015) is a climate dataset that consists of the measurements of daily temperatures, precipitation and snow observed over 150 years from 1218 meteorological stations in the USA. |
| Dataset Splits | Yes | We followed Scholz et al. (2023); Biloš et al. (2021); De Brouwer et al. (2019), applied 5-fold cross-validation and selected hyperparameters using a holdout validation set (20%). |
| Hardware Specification | Yes | All the models were experimented using the PyTorch library on a GeForce RTX-3090 GPU. |
| Software Dependencies | No | The paper mentions using the "PyTorch library" but does not specify its version number or any other software dependencies with their versions, which is necessary for reproducibility. |
| Experiment Setup | Yes | We searched the following hyperparameters for GraFITi: L ∈ {1, 2, 3, 4}, #heads in MAB from {1, 2, 4}, and hidden nodes in dense layers from {16, 32, 64, 128, 256}. We randomly sampled 5 different hyperparameter sets and chose the one with the best performance on the validation dataset. We used the Adam optimizer with a learning rate of 0.001, halving it when the validation loss did not improve for 10 epochs. All models were trained for up to 200 epochs, using early stopping with a patience of 30 epochs. |
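The split protocol quoted above (5-fold cross-validation with a 20% holdout validation set carved out of each fold's training portion) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `five_fold_splits` and the seeding scheme are assumptions:

```python
import numpy as np

def five_fold_splits(n_samples, val_fraction=0.2, seed=0):
    """Sketch of 5-fold CV where, within each fold, 20% of the
    non-test indices are held out as a validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    splits = []
    for k in range(5):
        test = folds[k]
        # all indices not in the current test fold
        rest = np.concatenate([folds[j] for j in range(5) if j != k])
        n_val = int(len(rest) * val_fraction)
        splits.append({
            "train": rest[n_val:],
            "valid": rest[:n_val],
            "test": test,
        })
    return splits
```

Each of the 5 splits uses a disjoint test fold, and hyperparameters would be selected on `valid` only, never on `test`.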
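The hyperparameter protocol in the Experiment Setup row (randomly sampling 5 distinct configurations from the listed search space) can be sketched as below. The names `SEARCH_SPACE` and `sample_configs` are hypothetical; the value grids come directly from the quoted text, while the training loop itself (Adam at lr 0.001, halving on a 10-epoch plateau, early stopping with patience 30, max 200 epochs) is only summarized in comments:

```python
import random

# Search space as quoted in the paper's experiment setup.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],           # L
    "num_heads": [1, 2, 4],               # heads in the MAB
    "hidden_size": [16, 32, 64, 128, 256] # nodes in dense layers
}

def sample_configs(n=5, seed=0):
    """Draw n distinct random configurations from SEARCH_SPACE."""
    rng = random.Random(seed)
    seen, configs = set(), []
    while len(configs) < n:
        values = tuple(rng.choice(v) for v in SEARCH_SPACE.values())
        if values not in seen:
            seen.add(values)
            configs.append(dict(zip(SEARCH_SPACE, values)))
    return configs

# Each sampled config would then be trained with:
#   optimizer = Adam(lr=1e-3)
#   scheduler: halve lr if validation loss stalls for 10 epochs
#   early stopping: patience 30 epochs, max 200 epochs
# and the config with the best validation loss is kept.
```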