Stratified GNN Explanations through Sufficient Expansion

Authors: Yuwen Ji, Lei Shi, Zhimeng Liu, Ge Wang

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiment results on both synthetic and real-world datasets demonstrate the superiority of our stratified explainer on standard interpretability tasks and metrics such as fidelity and explanation recall, with an average improvement of 11% and 8% over the best alternative on each data type." (A minimal fidelity sketch follows the table.) |
| Researcher Affiliation | Academia | Beihang University, Beijing, China; University of Science and Technology Beijing, Beijing, China |
| Pseudocode | Yes | "Algorithm 1: Training for explaining GNN at level-l" |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | "We select the widely used Mutagenicity dataset (MUTAG) (Kazius, McGuire, and Bursi 2005; Riesen and Bunke 2008; Ying et al. 2019), which contains 4,337 molecular graphs labeled with two classes based on mutagenic effect." (A loading sketch follows the table.) |
| Dataset Splits | No | The paper uses specific datasets (MUTAG, QMOFs, BA-Hierarchy-motif) but does not state their train/validation/test splits as percentages or sample counts. (An illustrative split follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper names specific GNN models (GCN, SchNet) but does not give version numbers for any software dependencies, programming languages, or libraries used in the experiments. |
| Experiment Setup | No | The paper states that grid search was used "to find the best hyperparameters for all methods", but the main text does not report the chosen values or other setup details such as learning rates, batch sizes, or optimizer configurations. (A generic grid-search sketch follows the table.) |
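
The paper scores explanations with fidelity and explanation recall. As a point of reference, here is a minimal sketch of one common fidelity+ formulation, the prediction drop when the explanation edges are removed; `model`, its call signature, and `edge_mask` are hypothetical stand-ins, and the paper's exact metric definition may differ.

```python
import torch

def fidelity_plus(model, x, edge_index, edge_mask, target):
    """Prediction drop on class `target` when explanation edges are removed.

    `model`, its (x, edge_index) call signature, and the boolean `edge_mask`
    are hypothetical stand-ins, not interfaces from the paper.
    """
    with torch.no_grad():
        # Probability of the target class on the full graph.
        p_full = model(x, edge_index).softmax(dim=-1)[0, target]
        # Probability after deleting the explanation edges.
        p_rest = model(x, edge_index[:, ~edge_mask]).softmax(dim=-1)[0, target]
    return (p_full - p_rest).item()
```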
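The Mutagenicity dataset the paper reports is publicly available. The paper does not name its data pipeline, so the following is only one common way to obtain the same 4,337-graph collection, via PyTorch Geometric's `TUDataset`:

```python
# One common way to obtain the 4,337-graph Mutagenicity collection;
# the paper does not name its data pipeline, so this is an assumption.
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='data/Mutagenicity', name='Mutagenicity')
print(len(dataset), dataset.num_classes)  # expected: 4337, 2
```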
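Because the paper reports no split percentages or counts, any reproduction must choose its own. Continuing from the loading snippet, this applies a conventional, purely illustrative 80/10/10 random split:

```python
# Purely illustrative 80/10/10 random split; the paper reports no ratios.
import torch

n = len(dataset)
perm = torch.randperm(n)
train_set = dataset[perm[: int(0.8 * n)]]
val_set = dataset[perm[int(0.8 * n): int(0.9 * n)]]
test_set = dataset[perm[int(0.9 * n):]]
```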
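Finally, the paper says hyperparameters were chosen by grid search but does not list the grids. This is a generic sketch of that pattern, with hypothetical ranges and a hypothetical `train_and_evaluate` helper standing in for a full train/eval loop:

```python
# Generic grid-search pattern with hypothetical ranges; `train_and_evaluate`
# is a placeholder for a full train/eval loop, not code from the paper.
from itertools import product

grid = {'lr': [1e-3, 1e-2], 'hidden_dim': [32, 64], 'epochs': [100, 200]}
best_cfg, best_score = None, float('-inf')
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    score = train_and_evaluate(**cfg)  # hypothetical helper
    if score > best_score:
        best_cfg, best_score = cfg, score
print(best_cfg, best_score)
```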