The Infinite Contextual Graph Markov Model

Authors: Daniele Castellana, Federico Errica, Davide Bacciu, Alessio Micheli

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On 8 graph classification tasks, we show that ICGMM: i) successfully recovers or improves CGMM's performances while reducing the hyperparameters search space; ii) performs comparably to most end-to-end supervised methods." "We compare ICGMM against CGMM as well as end-to-end supervised methods on eight different graph classification tasks, following a fair, robust and reproducible experimental procedure (Errica et al., 2020)."
Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Pisa, Italy; (2) NEC Laboratories Europe, Heidelberg, Germany; (3) work primarily done as a PhD student at the University of Pisa.
Pseudocode | Yes | "For the interested reader, we report the ICGMM complete Gibbs sampling equations and pseudo-code in Appendix A and B, respectively." (Algorithm 1 is presented in Appendix B.)
Open Source Code | Yes | "The code to rigorously reproduce our results is provided here: https://github.com/diningphil/iCGMM"
Open Datasets | Yes | "All datasets are publicly available (Kersting et al., 2016) and their statistics are summarized in Appendix C."
Dataset Splits | Yes | "It consists of an external 10-fold cross validation for model assessment, followed by an internal hold-out model selection for each of the external folds. Stratified data splits were already provided." (A sketch of this nested protocol follows the table.)
Hardware Specification | No | "A fair time comparison between all models requires to look at the time to result using the same resources, in our case CPUs." No specific CPU or GPU models, memory amounts, or detailed machine specifications are provided.
Software Dependencies | No | "Finally, we relied on PyTorch Geometric (Fey & Lenssen, 2019) for the implementation of our method." No specific version number is provided for PyTorch Geometric.
Experiment Setup | Yes | Number of layers {5, 10, 15, 20}; unibigram aggregation {sum, mean}; Gibbs sampling iterations {100} for ICGMM and {10, 20, 50, 100} for ICGMMαγ; α0 ∈ {1, 5} and γ ∈ {1, 2, 3} (only for ICGMMαγ); Adam optimizer with batch size 32 and learning rate 1e-3; hidden units {32, 128}; L2 regularization {0.0, 5e-4}; epochs {2000}; early stopping on validation accuracy, with patience 300 on chemical tasks and 100 on social ones. (A grid-search sketch of these ranges follows the table.)
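
The nested evaluation protocol quoted in the Dataset Splits row can be approximated with standard tooling. The sketch below is an illustration under stated assumptions, not the authors' released split files: the TU dataset name NCI1 and the 10% inner validation fraction are assumptions, whereas the paper reuses the pre-computed stratified splits of Errica et al. (2020).

```python
# Minimal sketch of the nested protocol: external 10-fold stratified CV for model
# assessment, internal hold-out for model selection on each outer training fold.
# Assumptions: dataset "NCI1" and the 10% validation fraction are illustrative only;
# the paper itself reuses pre-computed stratified splits (Errica et al., 2020).
from sklearn.model_selection import StratifiedKFold, train_test_split
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data/TUDataset", name="NCI1")  # TU benchmarks (Kersting et al., 2016)
labels = [int(graph.y) for graph in dataset]             # graph-level class labels

outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_val_idx, test_idx) in enumerate(outer_cv.split(labels, labels)):
    # Inner hold-out: carve a stratified validation set out of the outer training portion.
    train_idx, val_idx = train_test_split(
        train_val_idx,
        test_size=0.1,
        stratify=[labels[i] for i in train_val_idx],
        random_state=42,
    )
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val / {len(test_idx)} test")
```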
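
Similarly, the hyperparameter ranges in the Experiment Setup row can be written out as an explicit grid. The dictionary keys below are illustrative names, not the configuration schema of the released repository, and the second grid is assumed to describe the downstream supervised predictor trained on top of the unsupervised graph embeddings.

```python
# Sketch of the hyperparameter grids listed in the "Experiment Setup" row.
# Key names are illustrative; the released repository defines its own config format.
from itertools import product

icgmm_grid = {
    "num_layers": [5, 10, 15, 20],
    "unibigram_aggregation": ["sum", "mean"],
    "gibbs_iterations": [100],         # {10, 20, 50, 100} for the ICGMM-alpha-gamma variant
    "alpha_0": [1, 5],                 # only for ICGMM-alpha-gamma
    "gamma": [1, 2, 3],                # only for ICGMM-alpha-gamma
}

predictor_grid = {                     # assumed to be the downstream supervised model
    "optimizer": ["adam"],
    "batch_size": [32],
    "learning_rate": [1e-3],
    "hidden_units": [32, 128],
    "l2_regularization": [0.0, 5e-4],
    "max_epochs": [2000],
    "early_stopping_patience": [300],  # 300 on chemical tasks, 100 on social ones
}

def expand(grid):
    """Enumerate every configuration in a grid (standard grid-search expansion)."""
    keys = list(grid)
    for values in product(*(grid[key] for key in keys)):
        yield dict(zip(keys, values))

print(sum(1 for _ in expand(icgmm_grid)), "ICGMM configurations")
print(sum(1 for _ in expand(predictor_grid)), "predictor configurations")
```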