Breaking the Limits of Message Passing Graph Neural Networks

Authors: Muhammet Balcilar, Pierre Heroux, Benoit Gauzere, Pascal Vasseur, Sebastien Adam, Paul Honeine

ICML 2021

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "In this paper, we show that if the graph convolution supports are designed in the spectral domain by a nonlinear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized."
Researcher Affiliation | Collaboration | ¹LITIS Lab, University of Rouen Normandy, France; ²InterDigital, France; ³LITIS Lab, INSA Rouen Normandy, France; ⁴MIS Lab, Université de Picardie Jules Verne, France.
Pseudocode | Yes | "Algorithm 1 GNNML3 Preprocessing Step" and "Algorithm 2 GNNML3 Forward calculation"
Open Source Code | Yes | "All codes and datasets are available online." https://github.com/balcilar/gnn-matlang
Open Datasets | Yes | "All codes and datasets are available online." https://github.com/balcilar/gnn-matlang — "graph8c and sr25 datasets" (http://users.cecs.anu.edu.au/~bdm/data/graphs.html), the "EXP dataset (Abboud et al., 2020)", and the "Random Graph dataset (Chen et al., 2020)"
Dataset Splits | Yes | "We split the dataset into 400, 100, and 100 pairs for train, validation and test sets respectively." and "We used the Random Graph dataset (Chen et al., 2020) with the same partitioning: 1500, 1000 and 2500 graphs for train, validation and test respectively."
Hardware Specification | No | None found.
Software Dependencies | No | None found.
Experiment Setup | Yes | "We use 3-layer graph convolution followed by a sum readout layer, and then a linear layer to convert the readout layer representation into a 10-length feature vector. We keep the parameter budget around 30K for all methods." and "We use 4 convolution layers, a graph readout layer computing a sum and followed by 2 fully connected layers. All methods' parameter budget is around 30K. We keep the maximum number of iterations to 200 and we stop the algorithm if the error goes below 10⁻⁴."
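The key idea quoted in the Research Type row — a convolution support designed in the spectral domain as a nonlinear function of the Laplacian eigenvalues, then masked to a bounded receptive field — can be sketched as follows. This is a minimal illustration, not the paper's exact GNNML3 design: the function names, the Gaussian-bump spectral filter, and the k-hop mask construction are all assumptions made for the example.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    D = np.diag(d_inv_sqrt)
    return np.eye(len(A)) - D @ A @ D

def spectral_support(A, f, k_hop=2):
    """Build a support C = mask * (U f(Lambda) U^T).

    The support is designed in the spectral domain (a nonlinear function f
    of the eigenvalues), then masked to the k-hop neighborhood so it stays
    spatially localized.
    """
    L = normalized_laplacian(A)
    lam, U = np.linalg.eigh(L)        # eigendecomposition of the Laplacian
    C = U @ np.diag(f(lam)) @ U.T     # nonlinear custom function of eigenvalues
    # Receptive-field mask: node pairs reachable within k hops of each other,
    # obtained from powers of (A + I).
    reach = np.linalg.matrix_power(A + np.eye(len(A)), k_hop) > 0
    return C * reach

# Illustrative spectral filter: a Gaussian bump on a band of the spectrum.
bump = lambda lam: np.exp(-((lam - 1.0) ** 2) / 0.5)

# 4-node path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

C = spectral_support(A, bump, k_hop=2)
print(C.shape)  # (4, 4); C[0, 3] is zero because nodes 0 and 3 are 3 hops apart
```

In a message-passing layer, such a support C would replace (or accompany) the adjacency matrix when aggregating neighbor features, e.g. H' = σ(C H W); the masking step is what keeps the otherwise dense spectral operator localized.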