Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Generalization, Expressivity, and Universality of Graph Neural Networks on Attributed Graphs
Authors: Levi Rauchwerger, Stefanie Jegelka, Ron Levie
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our pseudometric correlates with output perturbations of the MPNN, allowing one to judge stability. ... As a proof of concept, we empirically test the correlation between δL DIDM and distance in the output of MPNNs. |
| Researcher Affiliation | Academia | 1 Technion – Israel Institute of Technology, Faculty of Mathematics; 2 MIT, Dept. of EECS and CSAIL; 3 TUM, School of CIT, MCML, MDSI |
| Pseudocode | No | The paper describes mathematical definitions and theoretical proofs for GNN properties. While algorithms like the Weisfeiler-Leman test are described conceptually, there are no explicitly labeled 'Pseudocode' or 'Algorithm' blocks with structured steps. |
| Open Source Code | Yes | The code is available at https://github.com/levi776/GNN-G-E-U. |
| Open Datasets | Yes | We used the MUTAG dataset (Morris et al., 2020) and followed the same random data split as in Chen et al. (2022); Böker et al. (2023)... We empirically test the correlation between δL DIDM and distance in the output of MPNNs on MUTAG and PROTEINS databases. |
| Dataset Splits | Yes | We used the MUTAG dataset (Morris et al., 2020) and followed the same random data split as in Chen et al. (2022); Böker et al. (2023): 90 percent of the data for training and 10 percent of the data for testing. We repeat the random split ten times. |
| Hardware Specification | No | The paper discusses various experimental setups including types of MPNN models, datasets, and hyperparameters, but it does not specify any particular hardware components such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions MPNN architectures like GIN and GraphConv, but it does not provide specific software dependency versions (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | Yes | We empirically test the correlation between δL DIDM and distance in the output of MPNNs. ... We conducted the experiment twice, once with a constant feature for all nodes and once with a signal which has a different constant value on each community of the graph. ... We present the results of the experiments when varying hidden dimension ... Figure 6 and Figure 7 show the results when varying the number of layers ... The GIN meanpool model is a variant of the Graph Isomorphism Network (GIN) ... Each layer consists of normalized sum aggregation and a multi-layer perceptron (MLP). The first MLP consists of two linear transformations, ReLU activations, and batch normalization. ... The GC meanpool model is a realization of a graph convolutional network (GCN) for graph-level representation learning. Each layer consists of normalized sum aggregation with linear message and update functions. |
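The GIN-meanpool layer quoted above (normalized sum aggregation, a two-layer MLP with ReLU, and mean pooling for the graph-level output) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact normalization constant is an assumption (here the node count `n`), batch normalization is omitted, and the function and weight names (`gin_meanpool_forward`, `W1`, `W2`) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gin_meanpool_forward(A, X, W1, W2):
    """Sketch of one GIN-style layer followed by mean pooling.

    A:  (n, n) adjacency matrix of the graph.
    X:  (n, d) node feature matrix.
    W1: (d, h) and W2: (h, d_out) weights of the two-layer MLP.
    Batch normalization from the paper's description is omitted.
    """
    n = A.shape[0]
    # Normalized sum aggregation: add each node's own features to the
    # sum of its neighbors' features, scaled by the node count
    # (the normalization choice is an assumption for this sketch).
    H = (A @ X + X) / n
    # Two-layer MLP with ReLU activations.
    H = relu(relu(H @ W1) @ W2)
    # Mean pooling over nodes yields a graph-level representation.
    return H.mean(axis=0)
```

On a triangle graph with constant unit features, `gin_meanpool_forward` returns a single vector whose length equals the MLP's output dimension, matching the graph-level readout the quoted setup describes.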