Graph Mixture Density Networks

Authors: Federico Errica, Davide Bacciu, Alessio Micheli

ICML 2021

Reproducibility assessment: each entry below lists the variable, the assessed result, and the supporting evidence extracted by the LLM from the paper.

Research Type: Experimental
  Evidence: "We evaluate our method on a new benchmark application that leverages random graphs for stochastic epidemic simulations. We show a significant improvement in the likelihood of epidemic outcomes when taking into account both multimodality and structure. The empirical analysis is complemented by two real-world regression tasks showing the effectiveness of our approach in modeling the output prediction uncertainty."

Researcher Affiliation: Academia
  Evidence: "Department of Computer Science, University of Pisa. Correspondence to: Federico Errica <federico.errica@phd.unipi.it>, Davide Bacciu <bacciu@di.unipi.it>, Alessio Micheli <micheli@di.unipi.it>."

Pseudocode: No
  The paper describes the EM framework used for training and provides the corresponding mathematical equations, but it does not include a dedicated pseudocode or algorithm block.

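Although no algorithm block is given, the training scheme the paper describes can be sketched. Below is a minimal, hypothetical generalized-EM update for a network with a Gaussian mixture output: responsibilities are computed in the E-step and held fixed, and the M-step takes a gradient step on the expected complete-data log-likelihood. The function name, tensor shapes, and Gaussian output family are illustrative assumptions, not the paper's actual code.

```python
import torch

def em_style_update(pi, mu, var, y, optimizer):
    """One generalized-EM update for a C-component Gaussian mixture output.

    pi:  mixing weights, shape (B, C), rows sum to 1
    mu:  component means, shape (B, C)
    var: component variances, shape (B, C)
    y:   scalar targets, shape (B,)
    """
    # E-step: posterior responsibilities, treated as constants in the M-step
    comp_log_lik = torch.distributions.Normal(mu, var.sqrt()).log_prob(y.unsqueeze(1))
    log_joint = torch.log(pi) + comp_log_lik         # (B, C)
    resp = torch.softmax(log_joint, dim=1).detach()
    # M-step: gradient ascent on the expected complete-data log-likelihood
    loss = -(resp * log_joint).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
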
Open Source Code: Yes
  Evidence: "We publicly release large datasets of stochastic SIR simulations..." Repository: https://github.com/diningphil/graph-mixture-density-networks

Open Datasets: Yes
  Evidence: "We publicly release large datasets of stochastic SIR simulations... We simulated the well-known stochastic SIR epidemiological model on Barabási-Albert graphs of size 100 (BA-100)... We also carry out simulations for Erdős-Rényi graphs (ER-100)... We will evaluate our model on the large chemical benchmarks alchemy_full (Chen et al., 2019) and ZINC_full (Irwin et al., 2012; Bresson & Laurent, 2019)"

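For concreteness, here is a minimal sketch of the kind of simulation behind BA-100, written with networkx. The infection probability, recovery probability, and attachment parameter m are illustrative placeholders, not the authors' simulation settings.

```python
import random
import networkx as nx

def simulate_sir(graph, beta=0.1, gamma=0.05, seed=0):
    """Discrete-time stochastic SIR on a graph.

    Returns the fraction of nodes that were ever infected."""
    rng = random.Random(seed)
    infected = {rng.choice(list(graph.nodes))}  # patient zero
    recovered = set()
    while infected:
        newly_infected, newly_recovered = set(), set()
        for u in infected:
            # Susceptible neighbours become infected with probability beta
            for v in graph.neighbors(u):
                if v not in infected and v not in recovered and rng.random() < beta:
                    newly_infected.add(v)
            # Infected nodes recover with probability gamma
            if rng.random() < gamma:
                newly_recovered.add(u)
        infected = (infected | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return len(recovered) / graph.number_of_nodes()

# A Barabasi-Albert graph of size 100, as in the BA-100 dataset
g = nx.barabasi_albert_graph(n=100, m=2, seed=42)
print(simulate_sir(g))
```
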
Dataset Splits: Yes
  Evidence: "We assess the performance of different models using a holdout strategy for all datasets (80%/10%/10% split). ...in these final training runs we use early stopping on a validation set extracted from the training set (10% of the training data)."

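A minimal sketch of how such a split could be realized with scikit-learn; the dataset size and random seeds are placeholders, as the paper does not specify the splitting code.

```python
from sklearn.model_selection import train_test_split

num_graphs = 1000  # placeholder dataset size
indices = list(range(num_graphs))

# Outer holdout: 80% train, 10% model-selection validation, 10% test
train_idx, rest_idx = train_test_split(indices, test_size=0.2, random_state=0)
val_idx, test_idx = train_test_split(rest_idx, test_size=0.5, random_state=0)

# Final training runs: early stopping on a further 10% of the training data
fit_idx, early_stop_idx = train_test_split(train_idx, test_size=0.1, random_state=0)
```
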
Hardware Specification: No
  The paper does not provide specific details about the hardware used to run the experiments.

Software Dependencies: No
  The paper mentions the Adam optimizer and "the GIN convolution... adapted from Xu et al. (2019)", but it does not provide version numbers for any software libraries or dependencies.

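Although no versions are stated, the GIN convolution referenced above is available off the shelf; here is a minimal sketch assuming PyTorch Geometric's GINConv and the 64 hidden units reported in the experiment setup below.

```python
import torch
from torch_geometric.nn import GINConv

# A GIN convolution in the style of Xu et al. (2019); the 2-layer MLP
# and hidden size of 64 mirror the hyper-parameters reported below.
mlp = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64)
)
conv = GINConv(mlp, train_eps=True)

x = torch.randn(10, 64)                     # node features (num_nodes, 64)
edge_index = torch.randint(0, 10, (2, 30))  # edges as (2, num_edges) indices
out = conv(x, edge_index)                   # updated node features (10, 64)
```
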
Experiment Setup: Yes
  Evidence: "MDN: C {2, 3, 5}, hidden units per convolution {64}, neighborhood aggregation {sum}, graph readout {sum, mean}, α {1/C, 1.05/C}, epochs {2500}, Φ_i {Linear model}, Adam optimizer with learning rate {0.0001}, full batch, patience {30}. GMDN: C {3, 5}, graph convolutional layers {2, 5, 7}, hidden units per convolution {64}, neighborhood aggregation {sum}, graph readout {sum, mean}, α {1/C, 1.05/C}, epochs {2500}, Φ_i {Linear model}, Adam optimizer with learning rate {0.0001}, full batch, patience {30}."

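A minimal sketch enumerating the GMDN grid above as plain Python configurations; the key names are invented for illustration, and α is expanded under the reading α ∈ {1/C, 1.05/C}.

```python
from itertools import product

# Enumerate the GMDN grid reported above; key names are illustrative,
# not the repository's actual configuration schema.
grid = []
for C, layers, readout, alpha_scale in product(
    (3, 5), (2, 5, 7), ("sum", "mean"), (1.0, 1.05)
):
    grid.append({
        "C": C,                    # number of mixture components
        "conv_layers": layers,     # graph convolutional layers
        "hidden_units": 64,
        "aggregation": "sum",      # neighborhood aggregation
        "readout": readout,       # graph readout
        "alpha": alpha_scale / C,  # α in {1/C, 1.05/C}, as reported
        "epochs": 2500,
        "lr": 1e-4,                # Adam, full batch
        "patience": 30,            # early-stopping patience
    })

print(len(grid))  # 2 * 3 * 2 * 2 = 24 configurations
```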