Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Authors: Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Liò
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on four popular real-world benchmark datasets for binary graph classification: MUTAG (Debnath et al. 1991), Reddit-Binary (Yanardag and Vishwanathan 2015), PROTEINS (Borgwardt et al. 2005) and IMDB-Binary (Yanardag and Vishwanathan 2015). |
| Researcher Affiliation | Academia | University of Cambridge hx263@cantab.ac.uk {pb737, dgg30, lcm67, pl219}@cam.ac.uk |
| Pseudocode | No | The main paper text does not contain structured pseudocode or algorithm blocks. It states that 'The full algorithm and implementation details are provided in Appendix A2,' which is outside the provided text. |
| Open Source Code | Yes | The code is available at https://github.com/xuyhan/gnn-dissect. |
| Open Datasets | Yes | We perform experiments on four popular real-world benchmark datasets for binary graph classification: MUTAG (Debnath et al. 1991), Reddit-Binary (Yanardag and Vishwanathan 2015), PROTEINS (Borgwardt et al. 2005) and IMDB-Binary (Yanardag and Vishwanathan 2015). |
| Dataset Splits | No | The paper mentions 'different train-test splits' but does not provide specific percentages, sample counts, or a detailed splitting methodology within the provided text; it defers exact descriptions to 'Appendix C'. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models or memory amounts) used for running experiments are provided in the text. |
| Software Dependencies | No | The paper mentions using GIN, GCN, and DIG for XGNN, but does not provide specific version numbers for any software dependencies (e.g., libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper states that 'The exact descriptions of the models and their training parameters are shown in Appendix C,' but these details are not provided within the main text of the paper. |