Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

Authors: Pascal Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | While we focus on the theoretical analysis of GNNs, in this section we illustrate that the empirical generalization error follows the trends given by the bounds described in Theorem 2. We empirically show that the test error is consistent with the trends predicted by the theoretical bound. Our results suggest that, under distributional assumptions, learning-theoretic bounds can explain behaviour of GNNs.
Researcher Affiliation | Academia | Pascal Mattia Esser, Technical University of Munich (esser@in.tum.de); Leena C. Vankadara, University of Tübingen (leena.chennuru-vankadara@uni-tuebingen.de); Debarghya Ghoshdastidar, Technical University of Munich and Munich Data Science Institute (ghoshdas@in.tum.de)
Pseudocode | No | No pseudocode or algorithm blocks are explicitly labeled in the paper.
Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] But we use the official GCN implementation of Kipf et al. (2017) (linked in the supplemental material) for the GNN implementation, and all experimental details are provided in the supplemental material. (A minimal sketch of this GCN architecture follows the table.)
Open Datasets | Yes | For the SBM we consider a graph with n = 500, m = 100 as default. ... on the example of the Cora dataset (Rossi et al. 2015). (A sketch of this SBM setup follows the table.)
Dataset Splits | No | The main paper does not explicitly state training/validation/test splits; it defers these details to the supplementary material, which was not available for analysis.
Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Experiments are not computationally intensive.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) are provided in the main text of the paper.
Experiment Setup | No | The main paper states, 'Details on the experimental setup are given in the Appendix.', and answers the checklist question 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?' with '[Yes] See supplemental material'. However, these details are not present in the main paper itself.
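
The Open Source Code row points to the official GCN implementation of Kipf et al. (2017). As a rough illustration of the architecture that implementation provides, here is a minimal NumPy sketch of a two-layer GCN forward pass with the symmetrically normalised propagation rule from that paper. This is not the authors' code; the weight shapes, ReLU activation, and softmax output are assumptions about a standard setup.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalised adjacency with self-loops,
    D^{-1/2} (A + I) D^{-1/2}, as in Kipf & Welling (2017)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: softmax(S relu(S X W1) W2), S the normalised adjacency.
    W1 has shape (d_in, d_hidden), W2 has shape (d_hidden, n_classes)."""
    S = normalize_adjacency(A)
    H = np.maximum(S @ X @ W1, 0.0)              # first layer + ReLU
    logits = S @ H @ W2                          # second (output) layer
    Z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)      # row-wise softmax
```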
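
The Open Datasets row quotes the SBM defaults n = 500 and m = 100. The following sketch generates such a graph with networkx; the two-block structure, the edge probabilities p and q, and the Gaussian node features are placeholder assumptions, since the exact distributional parameters are given only in the paper's supplement.

```python
import networkx as nx
import numpy as np

# Defaults quoted in the paper: n = 500 nodes, feature dimension m = 100.
# p, q and the Gaussian features below are assumed placeholders.
n, m = 500, 100
p, q = 0.5, 0.1                              # assumed intra-/inter-block edge probabilities

sizes = [n // 2, n // 2]                     # two equal-sized communities
G = nx.stochastic_block_model(sizes, [[p, q], [q, p]], seed=0)

A = nx.to_numpy_array(G)                     # (n, n) adjacency matrix
y = np.repeat([0, 1], n // 2)                # community labels
X = np.random.default_rng(0).normal(size=(n, m))  # placeholder node features
```

Together with the GCN sketch above, `gcn_forward(A, X, W1, W2)` would then produce class probabilities for this synthetic graph, which is the kind of setup one could use to track how empirical test error varies with n and m.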