Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching

Authors: Fang Wu, Siyuan Li, Xurui Jin, Yinghui Jiang, Dragomir Radev, Zhangming Niu, Stan Z. Li

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins. Results also demonstrate that MatchDrop is a general scheme to be equipped with GNNs for enhanced performance.
Researcher Affiliation | Collaboration | ¹School of Engineering, Westlake University, Hangzhou, China; ²Mindrank AI, Hangzhou, China; ³Department of Computer Science, Yale University, New Haven, United States.
Pseudocode | Yes | Algorithm 1: Workflow of MatchExplainer. (An illustrative subgraph-matching sketch follows this table.)
Open Source Code | Yes | The code is available at https://github.com/smiles724/MatchExplainer.
Open Datasets | Yes | Following Wang et al. (2021b), we use four standard datasets... MUTAG (Debnath et al., 1991; Kazius et al., 2005)... BA-3Motif... MNIST... VG-5 (Pope et al., 2019; Krishna et al., 2017).
Dataset Splits | No | The paper mentions 'full training and validation data as the reference set' and 'testing accuracy' but does not provide specific percentages or counts for the training, validation, and test splits.
Hardware Specification | Yes | All experiments are conducted on a single A100 PCIE GPU (40GB).
Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not provide version numbers for software dependencies such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | Regarding the re-implementation of ReFine in BA-3Motif, we use the original code with the same hyperparameters, and we adopt Adam optimizer (Kingma & Ba, 2014) and set the learning rate of pre-training and fine-tuning as 1e-3 and 1e-4, respectively. (A configuration sketch follows this table.)
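The paper's Algorithm 1 (the MatchExplainer workflow) is not reproduced in this report. Purely as a hypothetical illustration of the non-parametric subgraph-matching idea it refers to, the sketch below greedily pairs nodes of two graphs by cosine similarity of their feature vectors. The function name, the similarity measure, and the greedy heuristic are all assumptions for illustration, not the authors' procedure.

```python
# Hypothetical sketch: greedy node matching between two graphs by feature
# similarity. This is NOT the paper's Algorithm 1, only an illustration of
# the general non-parametric matching idea the table row above refers to.
import numpy as np

def greedy_match(x_a: np.ndarray, x_b: np.ndarray, k: int):
    """Greedily pick the k most similar (node_a, node_b) pairs of feature rows."""
    # Cosine similarity between every node of graph A and every node of graph B.
    a = x_a / np.linalg.norm(x_a, axis=1, keepdims=True)
    b = x_b / np.linalg.norm(x_b, axis=1, keepdims=True)
    sim = a @ b.T
    pairs = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        pairs.append((int(i), int(j)))
        sim[i, :] = -np.inf  # each node may be matched at most once
        sim[:, j] = -np.inf
    return pairs

# Example: match the 3 most similar node pairs of two random feature matrices.
rng = np.random.default_rng(0)
print(greedy_match(rng.normal(size=(5, 8)), rng.normal(size=(6, 8)), k=3))
```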
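The quoted experiment setup amounts to a standard two-phase optimizer configuration. Below is a minimal sketch assuming PyTorch; the model is a hypothetical stand-in, and only the choice of Adam with learning rates 1e-3 (pre-training) and 1e-4 (fine-tuning) comes from the paper.

```python
# Minimal sketch of the quoted optimizer setup, assuming PyTorch.
# Only the Adam optimizer and the two learning rates (1e-3 for pre-training,
# 1e-4 for fine-tuning) come from the paper; the model below is a
# placeholder, not the authors' GNN.
import torch

model = torch.nn.Linear(16, 2)  # stand-in for the actual GNN

# Pre-training phase: Adam with learning rate 1e-3.
pretrain_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fine-tuning phase: a fresh Adam instance with learning rate 1e-4.
finetune_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```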