Zero-One Laws of Graph Neural Networks

Authors: Sam Adam-Day, Theodor-Mihai Iliant, Ismail Ilkan Ceylan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically verify our theoretical findings on a carefully designed synthetic experiment using ER graphs with random features. To validate our theoretical findings, we conduct a series of experiments: since zero-one laws are of an asymptotic nature, we may need to consider very large graphs to observe clear empirical evidence for the phenomenon. Surprisingly, however, GNNs already exhibit clear evidence of a zero-one law even on small graphs.
Researcher Affiliation | Academia | Sam Adam-Day, Department of Mathematics, University of Oxford, Oxford, UK (sam.adam-day@cs.ox.ac.uk); Theodor-Mihai Iliant, Department of Computer Science, University of Oxford, Oxford, UK (theodor-mihai.iliant@lmh.ox.ac.uk); Ismail Ilkan Ceylan, Department of Computer Science, University of Oxford, Oxford, UK (ismail.ceylan@cs.ox.ac.uk)
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | Yes | We make the code for our experiments available online at https://github.com/SamAdamDay/Zero-One-Laws-of-Graph-Neural-Networks.
Open Datasets | No | The input graphs are drawn from G(n, 1/2) with corresponding node features independently drawn from U(0, 1). This describes how the data is generated (see the generation sketch after this table), not a publicly available dataset with concrete access information (link, DOI, or citation with author/year).
Dataset Splits | No | No specific train/validation/test splits (percentages or counts) were provided; the paper describes generating graphs of varying sizes for its experiments.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions) were provided.
Experiment Setup | Yes | We consider 10 GNN models of the same architecture, each with randomly initialized weights, where each weight is sampled independently from U(-1, 1). The non-linearity is eventually constant in both directions: the identity on [-1, 1], truncated to -1 if the input is smaller than -1 and to 1 if the input is greater than 1. For this experiment, we use an embedding dimensionality of 128 for each GCN model and draw graphs of sizes up to 5000, taking 32 samples of each size. We conduct these experiments with three choices of layers: 10 models with T = 1 layer, 10 models with T = 2 layers, and 10 models with T = 3 layers. (A hedged sketch of this setup appears below the table.)
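
To make the data-generation row concrete, here is a minimal sketch of sampling a G(n, 1/2) graph with i.i.d. U(0, 1) node features, as described above. NumPy and the function name are our choices; the paper's released code may organise this differently.

    import numpy as np

    def sample_er_graph_with_features(n, p=0.5, d=128, rng=None):
        """Sample an Erdos-Renyi graph G(n, p) with i.i.d. U(0, 1) node features.

        Returns an (n, n) adjacency matrix and an (n, d) feature matrix.
        """
        rng = np.random.default_rng() if rng is None else rng
        # Each unordered pair of nodes is connected independently with probability p.
        upper = np.triu(rng.random((n, n)) < p, k=1)
        adj = (upper | upper.T).astype(np.float32)
        # Node features are drawn independently from U(0, 1).
        feats = rng.random((n, d)).astype(np.float32)
        return adj, feats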
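
The experiment-setup row translates into a similarly short sketch: weights drawn i.i.d. from U(-1, 1) and a non-linearity that is the identity on [-1, 1] and constant beyond it. The mean aggregation with self-loops below is our assumption; the paper's exact GCN normalisation may differ.

    def truncated_identity(x):
        # Identity on [-1, 1]; clamped to -1 below and +1 above (eventually constant).
        return np.clip(x, -1.0, 1.0)

    def sample_gcn_weights(dims, rng):
        # One random model: a weight matrix per layer, entries i.i.d. from U(-1, 1).
        return [rng.uniform(-1.0, 1.0, size=(d_in, d_out))
                for d_in, d_out in zip(dims[:-1], dims[1:])]

    def gcn_forward(adj, feats, weights):
        # Mean aggregation over neighbours, including a self-loop (an assumption).
        a_hat = adj + np.eye(adj.shape[0], dtype=adj.dtype)
        a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)
        h = feats
        for w in weights:
            h = truncated_identity(a_hat @ h @ w)
        return h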
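
With these pieces, the zero-one law itself can be observed by fixing one random model and recording, for each graph size, the fraction of sampled graphs it classifies as 1; per the paper, this fraction should tend to 0 or 1 as n grows. The classification rule here (the sign of a random linear readout on the mean-pooled embedding) is a hypothetical choice of our own, not the paper's.

    rng = np.random.default_rng(0)
    weights = sample_gcn_weights([128, 128, 128], rng)  # one fixed model, T = 2 layers
    readout = rng.uniform(-1.0, 1.0, size=128)          # hypothetical binary readout
    for n in [50, 200, 1000]:
        ones = sum(
            int(gcn_forward(*sample_er_graph_with_features(n, rng=rng), weights)
                .mean(axis=0) @ readout > 0)
            for _ in range(32)  # 32 samples per size, as in the paper
        )
        print(f"n={n}: fraction classified 1 = {ones / 32:.2f}")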