Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Authors: Han Xuanyuan, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Liò
AAAI 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on four popular real-world benchmark datasets for binary graph classification: MUTAG (Debnath et al. 1991), Reddit-Binary (Yanardag and Vishwanathan 2015), PROTEINS (Borgwardt et al. 2005) and IMDB-Binary (Yanardag and Vishwanathan 2015). |
| Researcher Affiliation | Academia | University of Cambridge |
| Pseudocode | No | The main paper text does not contain structured pseudocode or algorithm blocks. It references 'The full algorithm and implementation details are provided in Appendix A2.' which is outside the provided text. |
| Open Source Code | Yes | The code is available at https://github.com/xuyhan/gnn-dissect. |
| Open Datasets | Yes | We perform experiments on four popular real-world benchmark datasets for binary graph classification: MUTAG (Debnath et al. 1991), Reddit-Binary (Yanardag and Vishwanathan 2015), PROTEINS (Borgwardt et al. 2005) and IMDB-Binary (Yanardag and Vishwanathan 2015). |
| Dataset Splits | No | The paper mentions 'different train-test splits' but does not provide specific percentages, sample counts, or detailed splitting methodology within the provided text. It defers exact descriptions to 'Appendix C'. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running experiments are provided in the text. |
| Software Dependencies | No | The paper mentions using GIN, GCN, and DIG for XGNN, but does not provide specific version numbers for any software dependencies (e.g., libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper states, 'The exact descriptions of the models and their training parameters are shown in Appendix C.', but these details are not provided within the main text of the paper. |
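The notice above states that the LLM-based classifications are validated against a manually labeled dataset, with full accuracy metrics in [1]. As a minimal sketch of how such per-variable agreement could be summarized, the snippet below computes raw accuracy and Cohen's kappa in pure Python; the labels shown are illustrative placeholders, not the actual validation data.

```python
# Sketch: agreement between LLM-assigned and manually-assigned labels
# for one reproducibility variable. All data here is illustrative.

def accuracy(pred, gold):
    """Fraction of items where the LLM label matches the manual label."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def cohens_kappa(pred, gold):
    """Chance-corrected agreement for categorical labels."""
    labels = set(pred) | set(gold)
    n = len(gold)
    po = accuracy(pred, gold)  # observed agreement
    # Expected agreement if both annotators labeled independently
    # according to their marginal label frequencies.
    pe = sum((pred.count(l) / n) * (gold.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical labels for one variable (e.g. "Open Source Code": Yes/No)
llm    = ["Yes", "Yes", "No", "Yes", "No", "No", "Yes", "No"]
manual = ["Yes", "Yes", "No", "No",  "No", "No", "Yes", "Yes"]

print(f"accuracy = {accuracy(llm, manual):.2f}")  # 0.75
print(f"kappa    = {cohens_kappa(llm, manual):.2f}")  # 0.50
```

Kappa is reported alongside raw accuracy because binary variables with skewed class balance (e.g. most papers scoring "No" on hardware specification) can yield high accuracy from chance agreement alone.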