Fine-grained Expressivity of Graph Neural Networks

Authors: Jan Böker, Ron Levie, Ningyuan Huang, Soledad Villar, Christopher Morris

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we validate our theoretical findings by showing that randomly initialized MPNNs, without training, exhibit competitive performance compared to their trained counterparts. Moreover, we evaluate different MPNN architectures based on their ability to preserve graph distances, highlighting the significance of our continuous 1-WL test in understanding MPNNs' expressivity." ... [Section 6, Experimental evaluation] "In the following, we investigate the applicability of our theory on real-world prediction tasks."
Researcher Affiliation | Academia | Jan Böker (RWTH Aachen University), Ron Levie (Technion – Israel Institute of Technology), Ningyuan Huang (Johns Hopkins University), Soledad Villar (Johns Hopkins University), Christopher Morris (RWTH Aachen University)
Pseudocode | No | The paper describes algorithms and methods in text and mathematical formulas but does not include explicitly labeled pseudocode blocks or algorithms.
Open Source Code | Yes | "The source code of all methods and evaluation protocols are available at https://github.com/nhuang37/finegrain_expressivity_GNN."
Open Datasets | Yes | "We benchmark on a subset of the established TUDataset [90]."
Dataset Splits | Yes | "For each dataset, we run paired experiments of trained and untrained MPNNs on the same ten random splits (train/test) and 10-fold cross-validation splits, using the evaluation protocol outlined in Morris et al. [90]."
Hardware Specification | Yes | "We conducted all experiments on a server with 256 GB RAM and four NVIDIA RTX A5000 GPU cards."
Software Dependencies | No | The paper describes the MPNN architectures and training processes but does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "We compare popular MPNN architectures, i.e., GIN and GraphConv, with their untrained counterparts. For untrained MPNNs, we freeze their input and hidden layer weights that are randomly initialized and only optimize for the output layer(s) used for the final prediction." ... Table 1: "Untrained MPNNs show competitive performance as trained MPNNs given sufficiently large hidden dimensionality (3-layer, 512-hidden-dimension)." ... "Figure 2 visualizes their normalized embedding distance and normalized graph distance, with an increasing number of hidden dimensions, from left to right. ... We observe similar behavior when increasing the number of layers; see Figure 3 in the appendix."
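
The "Experiment Setup" row describes a frozen-weight protocol: the message-passing layers keep their random initialization and only the output layer is trained. The following is a minimal sketch of that idea in plain PyTorch, not the authors' released code; the class names (GINLayer, UntrainedGIN), the dense-adjacency formulation, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN-style update: h' = MLP((1 + eps) * h + sum of neighbor features)."""
    def __init__(self, in_dim, out_dim, eps=0.0):
        super().__init__()
        self.eps = eps
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim), nn.ReLU())

    def forward(self, x, adj):
        # x: (n, in_dim) node features, adj: (n, n) dense adjacency matrix
        return self.mlp((1.0 + self.eps) * x + adj @ x)

class UntrainedGIN(nn.Module):
    """GIN-style MPNN with frozen random message-passing weights and a trainable readout."""
    def __init__(self, in_dim, hidden_dim=512, num_layers=3, num_classes=2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [GINLayer(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])])
        for p in self.layers.parameters():
            p.requires_grad = False              # freeze the randomly initialized layers
        self.readout = nn.Linear(hidden_dim, num_classes)  # only this part is optimized

    def forward(self, x, adj):
        for layer in self.layers:
            x = layer(x, adj)
        graph_embedding = x.sum(dim=0)           # sum pooling over the nodes of one graph
        return self.readout(graph_embedding)

# Only the readout parameters are handed to the optimizer.
model = UntrainedGIN(in_dim=7)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-2)
```

The trained counterpart would simply skip the freezing loop, which keeps the paired comparison between trained and untrained models down to a single change.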
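
The "Dataset Splits" row quotes a paired protocol in which trained and untrained MPNNs are evaluated on identical splits. One assumed way to enforce that pairing is to generate the folds once from fixed seeds and reuse the same index lists for both models; make_folds and the 10-fold setting below are illustrative, not taken from the paper's repository.

```python
import numpy as np
from sklearn.model_selection import KFold

def make_folds(num_graphs, num_folds=10, seed=0):
    """Return a fixed list of (train_idx, test_idx) pairs, identical for every model."""
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    return list(kfold.split(np.arange(num_graphs)))

folds = make_folds(num_graphs=1000)              # replace 1000 with the dataset size
# for train_idx, test_idx in folds:
#     evaluate(trained_mpnn, train_idx, test_idx)
#     evaluate(untrained_mpnn, train_idx, test_idx)  # same indices -> paired comparison
```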