Transfer Graph Neural Networks for Pandemic Forecasting
Authors: George Panagopoulos, Giannis Nikolentzos, Michalis Vazirgiannis
AAAI 2021, pp. 4838-4845
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the superiority of our method, highlighting the usefulness of GNNs in epidemiological prediction. |
| Researcher Affiliation | Academia | George Panagopoulos¹, Giannis Nikolentzos², Michalis Vazirgiannis¹; ¹Ecole Polytechnique, Palaiseau, France; ²Athens University of Economics and Business, Athens, Greece; george.panagopoulos@polytechnique.edu, {nikolentzos, mvazirg}@aueb.gr |
| Pseudocode | Yes | Algorithm 1 MPNN+TL |
| Open Source Code | Yes | The number of cases in the different regions of the 4 considered countries was collected from open data listed in the GitHub repository (https://github.com/geopanag/pandemic_tgnn) along with the code and the aggregated mobility data. |
| Open Datasets | Yes | Facebook has released several datasets in the scope of the Data For Good program (https://dataforgood.fb.com/tools/disease-prevention-maps/) to help researchers better understand the dynamics of COVID-19 and forecast the spread of the disease (Maas et al. 2019). We use a dataset that consists of measures of human mobility between administrative NUTS3 regions. The number of cases in the different regions of the 4 considered countries was collected from open data listed in the GitHub repository (https://github.com/geopanag/pandemic_tgnn) along with the code and the aggregated mobility data. |
| Dataset Splits | Yes | Furthermore, for each value of T, to identify the best model, we build a validation set which contains the samples corresponding to days T-1, T-3, T-5, T-7 and T-9, such that the training and validation sets have no overlap with the test set. (See the split sketch after this table.) |
| Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper states 'All the models are implemented with PyTorch (Paszke et al. 2019)' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We train the models for a maximum of 500 epochs with early stopping after 50 epochs of patience. Early stopping starts to occur from the 100th epoch onward. We set the batch size to 8. We use the Adam optimizer with a learning rate of 10^-3. We set the number of hidden units of the neighborhood aggregation layers to 64. Batch normalization and dropout are applied to the output of every neighborhood aggregation layer, with a dropout ratio of 0.5. For the MPNN+LSTM model, the dimensionality of the hidden states of the LSTMs is set equal to 64. (A model sketch and a training-loop sketch using these settings follow the table.) |
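
The "Experiment Setup" row describes the MPNN+LSTM architecture only in prose. Below is a minimal model sketch consistent with those settings: 64 hidden units per neighborhood aggregation layer, batch normalization and dropout 0.5 on each layer's output, and a 64-dimensional LSTM over the daily snapshots. The class names, the dense per-day adjacency matrices, and the two-layer depth are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One neighborhood-aggregation layer (hypothetical naming):
    a linear transform of mobility-weighted neighbor features,
    followed by batch normalization and dropout, as described above."""
    def __init__(self, in_dim, out_dim, dropout=0.5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, adj, x):
        # adj: (n_regions, n_regions) mobility graph for one day
        # x:   (n_regions, in_dim) node features, e.g. recent case counts
        h = torch.relu(self.linear(adj @ x))
        return self.dropout(self.bn(h))

class MPNN_LSTM(nn.Module):
    """Stack of MPNN layers applied per day, then an LSTM across days."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.gnn1 = MPNNLayer(in_dim, hidden)
        self.gnn2 = MPNNLayer(hidden, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, adjs, xs):
        # adjs, xs: length-T lists of daily graphs and node features
        seq = [self.gnn2(a, self.gnn1(a, x)) for a, x in zip(adjs, xs)]
        seq = torch.stack(seq, dim=1)             # (n_regions, T, hidden)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1]).squeeze(-1)  # next-day cases per region
```

For example, with 10 regions and a 7-day history, `MPNN_LSTM(in_dim=7)([torch.rand(10, 10)] * 7, [torch.randn(10, 7)] * 7)` returns one prediction per region.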
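The validation scheme quoted in the "Dataset Splits" row translates directly into a small helper. This is a sketch under our reading of that sentence: `make_splits` is a hypothetical name, day indices are assumed 0-based, and all earlier non-validation days are used for training.

```python
def make_splits(test_day):
    """For a test day T, hold out days T-1, T-3, T-5, T-7, T-9 for
    validation and use the remaining earlier days for training
    (hypothetical helper mirroring the split quoted above)."""
    val_days = [test_day - k for k in (1, 3, 5, 7, 9)]
    train_days = [d for d in range(test_day) if d not in val_days]
    return train_days, val_days, [test_day]

train_days, val_days, test_days = make_splits(30)
assert not set(train_days) & set(val_days)  # no overlap, as the paper requires
```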
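Finally, the hyperparameters in the "Experiment Setup" row map onto a standard PyTorch training loop: at most 500 epochs, Adam with learning rate 10^-3, batch size 8, and early stopping with a patience of 50 epochs that only begins counting from the 100th epoch (our reading of the quoted text). The stand-in model, random tensors, and MSE loss below are our assumptions for the sake of a self-contained sketch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in data: 200 samples, 10 features -> 1 target.
X, y = torch.randn(200, 10), torch.randn(200)
train_loader = DataLoader(TensorDataset(X[:160], y[:160]), batch_size=8, shuffle=True)
val_X, val_y = X[160:], y[160:]

model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr = 10^-3, as in the paper
loss_fn = torch.nn.MSELoss()

best_val, wait = float("inf"), 0
for epoch in range(1, 501):                  # at most 500 epochs
    model.train()
    for xb, yb in train_loader:              # batch size 8
        opt.zero_grad()
        loss_fn(model(xb).squeeze(-1), yb).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(val_X).squeeze(-1), val_y).item()
    if val_loss < best_val:
        best_val, wait = val_loss, 0         # improvement resets the patience counter
    elif epoch >= 100:                       # patience counted only from epoch 100 on
        wait += 1
        if wait >= 50:                       # 50 epochs of patience
            break
```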