Rethinking the Expressive Power of GNNs via Graph Biconnectivity

Authors: Bohang Zhang, Shengjie Luo, Liwei Wang, Di He

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A set of experiments on both synthetic and real datasets demonstrates that our approach can consistently outperform prior GNN architectures. In this section, we perform empirical evaluations of our proposed Graphormer-GD.
Researcher Affiliation | Academia | (1) National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; (2) Center for Data Science, Peking University
Pseudocode | Yes | Algorithm 1: The 1-dimensional Weisfeiler-Lehman Algorithm, Algorithm 2: The k-dimensional Folklore Weisfeiler-Lehman Algorithm, Algorithm 3: DSS Weisfeiler-Lehman Algorithm, Algorithm 4: The Generalized Distance Weisfeiler-Lehman Algorithm. (A 1-WL sketch is given after the table.)
Open Source Code | Yes | The code and models will be made publicly available at https://github.com/lsj2408/Graphormer-GD.
Open Datasets | Yes | We further study the empirical performance of our Graphormer-GD on the real-world benchmark: ZINC from Benchmarking-GNNs (Dwivedi et al., 2020).
Dataset Splits | Yes | We follow Li et al. (2020) to split the nodes of each graph into train/validation/test subsets with the ratio being 0.8/0.1/0.1, respectively. (A split sketch is given after the table.)
Hardware Specification | Yes | All models are trained on 1 NVIDIA Tesla V100 GPU. All models are trained on 4 NVIDIA Tesla V100 GPUs.
Software Dependencies | No | The paper mentions 'AdamW (Kingma & Ba, 2014)' but does not specify version numbers for other software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | The dimension of hidden layers and feed-forward layers is set to 768. The number of Gaussian Basis kernels is set to 128. The number of attention heads is set to 64. The batch size is set to 32. We use AdamW (Kingma & Ba, 2014) as the optimizer and set its hyperparameter ϵ to 1e-8 and (β1, β2) to (0.9, 0.999). The peak learning rate is set to 9e-5. The model is trained for 100k steps with a 6K-step warm-up stage. (An optimizer/schedule sketch is given after the table.)
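
The Pseudocode row lists four WL-style algorithms. As a point of reference, here is a minimal plain-Python sketch of 1-WL color refinement; the function name wl_refinement and the adjacency-list input format are illustrative choices, not taken from the paper's Algorithm 1.

```python
def wl_refinement(adj, num_iters=None):
    """1-WL color refinement. adj: dict mapping each node to an iterable of neighbors."""
    nodes = list(adj)
    colors = dict.fromkeys(nodes, 0)          # uniform initial coloring
    if num_iters is None:
        num_iters = len(nodes)                # 1-WL stabilizes within |V| rounds
    for _ in range(num_iters):
        # New signature = (own color, sorted multiset of neighbor colors).
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in nodes}
        # Compress signatures back into small integer colors.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in nodes}
        if len(set(new_colors.values())) == len(set(colors.values())):
            break                             # refinement splits no color class: stable
        colors = new_colors
    return colors

# Example: a 4-cycle, where all nodes end up with the same color.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wl_refinement(cycle))  # {0: 0, 1: 0, 2: 0, 3: 0}
```

To compare two graphs with this routine, one would run the refinement on their disjoint union and compare the resulting color histograms.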
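For the Dataset Splits row, a minimal sketch of the 0.8/0.1/0.1 node split via a random permutation; the seed, the use of NumPy, and the helper name split_nodes are assumptions here, not details taken from Li et al. (2020).

```python
import numpy as np

def split_nodes(num_nodes, seed=0):
    """Randomly split node indices into train/validation/test with ratio 0.8/0.1/0.1."""
    perm = np.random.default_rng(seed).permutation(num_nodes)
    n_train, n_val = int(0.8 * num_nodes), int(0.1 * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```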
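For the Experiment Setup row, a minimal PyTorch sketch wiring up the quoted optimizer hyperparameters; the stand-in model and the schedule shape after warm-up are assumptions (the paper only states a 6K-step warm-up within 100k training steps).

```python
import torch

model = torch.nn.Linear(768, 768)  # placeholder for Graphormer-GD (hidden dim 768)

# AdamW with the quoted hyperparameters: eps = 1e-8, betas = (0.9, 0.999),
# peak learning rate 9e-5.
optimizer = torch.optim.AdamW(model.parameters(), lr=9e-5, betas=(0.9, 0.999), eps=1e-8)

# Linear warm-up over the first 6k of 100k steps; the constant rate after
# warm-up is an assumption, not a detail stated in the paper.
warmup_steps, total_steps = 6_000, 100_000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps)
)
```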