How Powerful are K-hop Message Passing Graph Neural Networks
Authors: Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, Muhan Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct extensive experiments to evaluate the performance of KP-GNN. Specifically, we 1) empirically verify the expressive power of KP-GNN on 3 simulation datasets and demonstrate the benefits of KP-GNN compared to normal K-hop message passing GNNs; 2) demonstrate the effectiveness of KP-GNN on identifying various node properties, graph properties, and substructures with 3 simulation datasets; 3) show that the KP-GNN can achieve state-of-the-art performance on multiple real-world datasets; 4) analyze the running time of KP-GNN. |
| Researcher Affiliation | Academia | Jiarui Feng1,2 Yixin Chen1 Fuhai Li2 Anindya Sarkar1 Muhan Zhang3,4 {feng.jiarui, fuhai.li, anindya}@wustl.edu, chen@cse.wustl.edu, muhan@pku.edu.cn 1Department of CSE, Washington University in St. Louis 2Institute for Informatics, Washington University School of Medicine 3Institute for Artificial Intelligence, Peking University 4Beijing Institute for General Artificial Intelligence |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/JiaruiFeng/KP-GNN. |
| Open Datasets | Yes | To evaluate the expressive power of KP-GNN, we choose: 1) EXP dataset [31], which contains 600 pairs of non-isomorphic graphs (1-WL failed). [...] 2) SR25 dataset [39] [...] 3) CSL dataset [40] [...] To demonstrate the capacity of KP-GNN on counting node/graph properties and substructures, we pick 1) Graph property regression [...] on random graph dataset [41]. 2) Graph substructure counting [...] on random graph dataset [42]. To evaluate the performance of KP-GNN on real-world datasets, we select 1) MUTAG [43], D&D [44], PROTEINS [44], PTC-MR [45], and IMDB-B [46] from TU database. 2) QM9 [47, 48] and ZINC [49] for molecular properties prediction. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits in the main text. It states that 'The detailed experimental setting is described in Appendix J' and 'The detailed statistics of the datasets are described in Appendix L', deferring this information to appendices not provided. |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running experiments are provided in the main body of the paper. |
| Software Dependencies | No | The paper states: 'We implement the KP-GNN with PyTorch Geometric package [38].' However, it does not provide specific version numbers for PyTorch Geometric or any other software dependencies needed for replication. |
| Experiment Setup | No | The paper does not provide specific experimental setup details, such as hyperparameter values or training configurations, in the main text. It states: 'The detail of each variant of KP-GNN is described in Appendix I and the detailed experimental setting is described in Appendix J', deferring this information to appendices not provided. |