Reconstruction for Powerful Graph Representations

Authors: Leonardo Cotta, Christopher Morris, Bruno Ribeiro

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we show the extent to which graph reconstruction (reconstructing a graph from its subgraphs) can mitigate the theoretical and practical problems currently faced by GRL architectures. First, we leverage graph reconstruction to build two new classes of expressive graph representations. Secondly, we show how graph reconstruction boosts the expressive power of any GNN architecture while being a (provably) powerful inductive bias for invariances to vertex removals. Empirically, we show how reconstruction can boost a GNN's expressive power, while maintaining its invariance to permutations of the vertices, by solving seven graph property tasks not solvable by the original GNN. Further, we demonstrate how it boosts state-of-the-art GNNs' performance across nine real-world benchmark datasets.
Researcher Affiliation | Academia | Leonardo Cotta, Purdue University (cotta@purdue.edu); Christopher Morris, Mila - Quebec AI Institute, McGill University (chris@christophermorris.info); Bruno Ribeiro, Purdue University (ribeiro@cs.purdue.edu)
Pseudocode | No | The paper describes mathematical formulations and discusses algorithms but does not present any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All results are fully reproducible from the source and are available at https://github.com/PurdueMINDS/reconstruction-gnns.
Open Datasets | Yes | To address Q2 and Q3, we evaluated k-Reconstruction GNNs on a diverse set of large-scale, standard benchmark instances [49, 74]. Specifically, we used the ZINC (10K) [32], ALCHEMY (10K) [23], OGBG-MOLFREESOLV, OGBG-MOLESOL, and OGBG-MOLLIPO [49] regression datasets. For the case of graph classification, we used OGBG-MOLHIV, OGBG-MOLPCBA, OGBG-TOX21, and OGBG-TOXCAST [49].
Dataset Splits | No | The paper states that it 'replicated the exact architectures from the original paper' and 'retain all hyperparameters and training procedures from the original GNNs', implying standard splits for these well-known datasets, but it does not explicitly state the train/validation/test splits within this paper.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions a 'PyTorch Geometric implementation [35]' but does not provide specific version numbers for PyTorch Geometric or any other software dependencies.
Experiment Setup | Yes | We always replicated the exact architectures from the original paper, building on the respective PyTorch Geometric implementation [35]. For the OGBG regression datasets, we noticed how using a jumping knowledge layer yields better validation and test results for GIN and GCN. Thus we made this small change. For each of these three architectures, we implemented k-Reconstruction GNNs for k ∈ {n−1, n−2, n−3, n/2} using a Deep Sets function [112] over the exact same original GNN architecture. For more details, see Appendix G. Experimental setup: To establish fair comparisons, we retain all hyperparameters and training procedures from the original GNNs to train the corresponding k-Reconstruction GNNs.
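The experiment-setup row above describes k-Reconstruction GNNs only in prose. As a concrete illustration, here is a minimal sketch of the (n−1)-reconstruction case, assuming PyTorch and PyTorch Geometric: the same base GNN is run on every vertex-deleted subgraph, and the resulting multiset of subgraph embeddings is aggregated with a Deep Sets-style sum before the prediction head. All names (ReconstructionGIN, embed, the toy 4-cycle) and hyperparameters are illustrative assumptions, not the authors' released code; their exact implementation lives in the linked repository.

import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, global_add_pool
from torch_geometric.utils import subgraph


class ReconstructionGIN(nn.Module):
    """Illustrative (n-1)-Reconstruction GNN: one shared GIN encoder applied
    to every vertex-deleted subgraph, aggregated with a Deep Sets sum."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv = GINConv(nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim)))
        # Deep Sets: phi is applied per subgraph embedding, rho after the sum.
        self.phi = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, out_dim))

    def embed(self, x, edge_index):
        # Base GNN plus sum readout on a single (sub)graph.
        h = self.conv(x, edge_index)
        batch = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        return global_add_pool(h, batch)  # shape [1, hidden_dim]

    def forward(self, x, edge_index):
        n = x.size(0)
        parts = []
        for v in range(n):  # one (n-1)-vertex subgraph per deleted vertex
            keep = torch.tensor([u for u in range(n) if u != v],
                                device=x.device)
            sub_ei, _ = subgraph(keep, edge_index,
                                 relabel_nodes=True, num_nodes=n)
            parts.append(self.phi(self.embed(x[keep], sub_ei)))
        # Sum over the multiset of subgraph embeddings, then predict.
        return self.rho(torch.stack(parts, dim=0).sum(dim=0))


# Toy usage: a 4-cycle with 3-dimensional node features.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 0],
                           [1, 0, 2, 1, 3, 2, 0, 3]])
model = ReconstructionGIN(in_dim=3, hidden_dim=16, out_dim=1)
print(model(x, edge_index))  # one regression output for the graph

For larger k-reconstruction variants (e.g., k = n/2) one would sample a fixed number of k-vertex subgraphs rather than enumerate them all; exhaustive enumeration is used here only to keep the sketch short.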