On the Expressive Power of Spectral Invariant Graph Neural Networks

Authors: Bohang Zhang, Lingxiao Zhao, Haggai Maron

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we empirically evaluate the expressive power of various GNN architectures studied in this paper. We adopt the BREC benchmark (Wang & Zhang, 2023), a comprehensive dataset for comparing the expressive power of GNNs. We focus on the following GNNs that are closely related to this paper: (i) Graphormer (Ying et al., 2021) (a distance-based GNN that uses SPD, see Section 5); (ii) NGNN (Zhang & Li, 2021) (a variant of subgraph GNN, see Section 4.1); (iii) ESAN (Bevilacqua et al., 2022) (an advanced subgraph GNN that adds cross-graph aggregations, see Section 4.1); (iv) PPGN (Maron et al., 2019a) (a higher-order GNN, see Section 6.3); (v) EPNN (this paper). We follow the same setup as in Wang & Zhang (2023) in both training and evaluation. For all baseline GNNs, the reported numbers are directly borrowed from Wang & Zhang (2023); for EPNN, we run the model 10 times with different seeds and report the average performance." Results are reported in Table 1, "Empirical performance of different GNNs on BREC."
Researcher Affiliation | Collaboration | Bohang Zhang (Peking University), Lingxiao Zhao (Carnegie Mellon University), Haggai Maron (Technion; NVIDIA Research). Correspondence to: Bohang Zhang <zhangbohang@pku.edu.cn>, Haggai Maron <hmaron@nvidia.com>.
Pseudocode | No | The paper uses mathematical equations to describe its algorithms and refinement rules (e.g., Equations 1, 2, and 3), but does not present them in a labeled pseudocode or algorithm block.
Open Source Code | Yes | "Our code can be found in the following GitHub repo: https://github.com/LingxiaoShawn/EPNN-Experiments"
Open Datasets | Yes | "We adopt the BREC benchmark (Wang & Zhang, 2023), a comprehensive dataset for comparing the expressive power of GNNs."
Dataset Splits | No | The paper defers to an external setup and does not specify its own train/validation/test splits: "We follow the same setup as in Wang & Zhang (2023) in both training and evaluation. For all baseline GNNs, the reported numbers are directly borrowed from Wang & Zhang (2023); for EPNN, we run the model 10 times with different seeds and report the average performance."
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory specifications) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, or frameworks) used in the experiments.
Experiment Setup | No | The paper states, "We follow the same setup as in Wang & Zhang (2023) in both training and evaluation," but does not explicitly provide hyperparameter values or specific training configurations within the text.
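The evaluation protocol quoted above (run the model 10 times with different seeds and report the average) can be sketched as follows. This is a minimal illustration, not the authors' code: `train_and_evaluate` is a hypothetical stand-in for the actual EPNN training and evaluation loop on BREC, and simply simulates a seed-dependent score.

```python
import random
import statistics

def train_and_evaluate(seed: int) -> float:
    """Hypothetical stand-in for training EPNN on BREC with a given seed.

    A real implementation would seed the framework's RNGs, train the
    model, and return its distinguishing accuracy; here we simulate a
    noisy result around a fixed value purely for illustration.
    """
    rng = random.Random(seed)
    return 0.75 + rng.uniform(-0.02, 0.02)

def seed_averaged_accuracy(num_seeds: int = 10) -> float:
    """Run once per seed and report the mean, as the paper describes."""
    results = [train_and_evaluate(seed) for seed in range(num_seeds)]
    return statistics.mean(results)

print(f"mean accuracy over 10 seeds: {seed_averaged_accuracy():.4f}")
```

Reporting a seed-averaged score (often alongside a standard deviation via `statistics.stdev`) reduces the variance that a single lucky or unlucky initialization would introduce.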