The Surprising Power of Graph Neural Networks with Random Node Initialization

Authors: Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, Thomas Lukasiewicz

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'In this section, we first evaluate the effect of RNI on MPNN expressiveness based on EXP, and compare against established higher-order GNNs. We then extend our analysis to CEXP. Our experiments use the following models:'
Researcher Affiliation | Academia | Ralph Abboud¹, Ismail Ilkan Ceylan¹, Martin Grohe², and Thomas Lukasiewicz¹ (¹University of Oxford, ²RWTH Aachen University)
Pseudocode | No | The paper describes its methods in text but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing source code for the methodology, nor a link to a code repository.
Open Datasets | No | The paper introduces the synthetic datasets EXP and CEXP, stating 'We design EXP, a synthetic dataset requiring 2-WL expressive power for models to achieve above-random performance, and run MPNNs with RNI on it.' However, it does not provide concrete access information (link, DOI, or specific citation) for these datasets, nor does it state that they are publicly available.
Dataset Splits | Yes | 'We evaluate all models on EXP using 10-fold cross-validation.'
Hardware Specification | No | 'Experiments were conducted on the Advanced Research Computing (ARC) cluster administered by the University of Oxford.' This describes the computing environment but omits specific hardware details such as GPU/CPU models.
Software Dependencies | No | The paper mentions models (GCN, MPNNs), non-linearities (ELU), and initialization methods (Xavier), but does not name any software with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | 'A GCN with 8 distinct message passing iterations, ELU non-linearities [Clevert et al., 2016], 64-dimensional embeddings, and deterministic learnable initial node embeddings indicating node type.' The authors train 3-GCN for 100 epochs per fold and all other systems for 500 epochs, and evaluate the model with four initialization distributions: the standard normal distribution N(0,1) (N), the uniform distribution over [−1,1] (U), Xavier normal (XN), and Xavier uniform (XU) [Glorot and Bengio, 2010], where 64x/100 dimensions are initially randomized for a randomization percentage x.
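The experiment setup above samples a fraction of each node's 64-dimensional embedding from one of four distributions (N, U, XN, XU), leaving the rest deterministic. Below is a minimal numpy sketch of that random node initialization (RNI) step — not the authors' code. The Xavier bounds here assume fan-in = fan-out = embedding dimension, and zeros stand in for the learnable type embeddings; both are assumptions.

```python
import numpy as np

def rni_embeddings(num_nodes, dim=64, rand_pct=100, dist="N", rng=None):
    """Random node initialization: the first round(dim * rand_pct / 100)
    dimensions are sampled from `dist`; the remaining dimensions stay
    deterministic (zeros here, standing in for learnable embeddings)."""
    rng = np.random.default_rng() if rng is None else rng
    k = round(dim * rand_pct / 100)  # number of randomized dimensions
    emb = np.zeros((num_nodes, dim))
    if dist == "N":        # standard normal N(0, 1)
        emb[:, :k] = rng.standard_normal((num_nodes, k))
    elif dist == "U":      # uniform over [-1, 1]
        emb[:, :k] = rng.uniform(-1.0, 1.0, (num_nodes, k))
    elif dist == "XN":     # Xavier normal; fan_in = fan_out = dim is an assumption
        std = np.sqrt(2.0 / (dim + dim))
        emb[:, :k] = rng.normal(0.0, std, (num_nodes, k))
    elif dist == "XU":     # Xavier uniform; same fan assumption
        bound = np.sqrt(6.0 / (dim + dim))
        emb[:, :k] = rng.uniform(-bound, bound, (num_nodes, k))
    else:
        raise ValueError(f"unknown distribution: {dist}")
    return emb
```

Because the random dimensions are resampled per forward pass, a full training loop would call this function anew for every batch rather than caching the result.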
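The 10-fold cross-validation quoted under Dataset Splits can be sketched generically as follows; this is a plain fold generator, not the authors' pipeline, and the shuffling seed is a placeholder.

```python
import numpy as np

def ten_fold_splits(num_examples, seed=0):
    """Yield (train_idx, test_idx) pairs for 10-fold cross-validation:
    each example lands in exactly one test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_examples)
    folds = np.array_split(idx, 10)  # near-equal fold sizes
    for i in range(10):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != i])
        yield train_idx, test_idx
```

Under this protocol each model is trained ten times (100 epochs per fold for 3-GCN, 500 for the others), and reported numbers are aggregated across the ten held-out folds.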