GOAt: Explaining Graph Neural Networks via Graph Output Attribution
Authors: Shengyao Lu, Keith G. Mills, Jiao He, Bang Liu, Di Niu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on synthetic and real-world data, we show that our method not only outperforms various state-of-the-art GNN explainers in terms of the commonly used fidelity metric, but also exhibits stronger discriminability and stability by a remarkable margin. |
| Researcher Affiliation | Collaboration | Shengyao Lu1, Keith G. Mills1, Jiao He2, Bang Liu3, Di Niu1 1Department of Electrical and Computer Engineering, University of Alberta 2Kirin AI Algorithm & Solution, Huawei 3DIRO, Université de Montréal & Mila |
| Pseudocode | No | The paper describes the mathematical formulations of the method but does not include a distinct pseudocode block or algorithm. |
| Open Source Code | Yes | Code can be found at: https://github.com/sluxsr/GOAt. The code is available in the Supplementary Material, provided alongside this Appendix file. |
| Open Datasets | Yes | For the graph classification task, we evaluate on a synthetic dataset, BA-2motifs (Luo et al., 2020), and two real-world datasets, Mutagenicity (Kazius et al., 2005) and NCI1 (Pires et al., 2015). For the node classification task, we evaluate on three synthetic datasets (Luo et al., 2020): BA-shapes, BA-Community, and Tree-grid. |
| Dataset Splits | Yes | The GNNs are trained using the following data splits: 80% for the training set, 10% for the validation set, and 10% for the testing set. |
| Hardware Specification | Yes | All experiments are conducted on an Intel Core i7-10700 processor and an NVIDIA GeForce RTX 3090 graphics card. |
| Software Dependencies | No | The paper mentions using GNNs and various explainers but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, Python 3.x). |
| Experiment Setup | Yes | The GNN architectures consist of 3 message-passing layers and a 2-layer classifier. The hidden dimension is set to 32 for BA-2Motifs, BA-Shapes, BA-Community, Tree-grid, and 64 for Mutagenicity and NCI1. |
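The architecture described in the last row (3 message-passing layers, a 2-layer classifier head, hidden dimension 32 for BA-2Motifs) can be sketched as a minimal forward pass in NumPy. This is an illustrative reconstruction, not the authors' code: the GCN-style symmetric normalization, mean pooling, ReLU activations, and random weight scales are all assumptions, since the paper excerpt does not specify them.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mp_layer(A_hat, H, W):
    # One message-passing step: aggregate neighbor features, then transform.
    # GCN-style propagation is an assumption; the paper excerpt only states
    # "3 message-passing layers".
    return relu(A_hat @ H @ W)

rng = np.random.default_rng(0)
n_nodes, in_dim, hid, n_classes = 10, 8, 32, 2  # hid=32 as for BA-2Motifs

# Random symmetric adjacency with self-loops, symmetrically normalized.
A = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = rng.standard_normal((n_nodes, in_dim))

# Three message-passing layers.
W1 = 0.1 * rng.standard_normal((in_dim, hid))
W2 = 0.1 * rng.standard_normal((hid, hid))
W3 = 0.1 * rng.standard_normal((hid, hid))
H = mp_layer(A_hat, X, W1)
H = mp_layer(A_hat, H, W2)
H = mp_layer(A_hat, H, W3)

# Mean-pool node embeddings (assumed readout), then a 2-layer classifier.
g = H.mean(axis=0)
Wc1 = 0.1 * rng.standard_normal((hid, hid))
Wc2 = 0.1 * rng.standard_normal((hid, n_classes))
logits = relu(g @ Wc1) @ Wc2
print(logits.shape)
```

For node classification the pooling step would be dropped and the classifier applied per node; the graph-level variant shown here matches the BA-2Motifs setting.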