Factorized Explainer for Graph Neural Networks
Authors: Rundong Huang, Farhad Shirani, Dongsheng Luo
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of our proposed factorized explainer. Comprehensive empirical studies on both synthetic and real-life datasets demonstrate that our method can consistently improve the quality of the explanations. |
| Researcher Affiliation | Academia | Rundong Huang (Technical University of Munich, Munich, Germany); Farhad Shirani, Dongsheng Luo (Florida International University, Miami, U.S.) |
| Pseudocode | No | The paper states, 'A detailed algorithm can be found in Appendix.' However, the main paper text does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code or a direct link to a code repository for its methodology. |
| Open Datasets | Yes | Six benchmark datasets with ground truth explanations are used for evaluation, with BA-Shapes, BA-Community, Tree-Circles, and Tree-Grid (Ying et al. 2019) for the node classification task, and BA-2motifs (Luo et al. 2020) and MUTAG (Debnath et al. 1991) for the graph classification task. |
| Dataset Splits | No | The paper mentions training a GNN model and refers to 'Detailed experimental setups' being in the Appendix, but the provided text does not specify exact train/validation/test dataset splits (e.g., percentages, sample counts, or specific predefined splits). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only describes the GNN model architecture and training procedures. |
| Software Dependencies | No | The paper names various explanation methods (e.g., GNNExplainer, PGExplainer) and mentions using a 'three-layer GNN', but it does not list specific software packages or libraries with their corresponding version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | For each dataset, we train a graph neural network model to perform the node or graph classification task. Each model is a three-layer GNN with a hidden size of 20, followed by an MLP that maps these embeddings to the number of classes. For each experiment, we conduct 10 times with random parameter initialization and report the average results as well as the standard deviation. Next, we increase the dimensionality of hidden representation in the GNN model from 20 to 80. |
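The architecture quoted in the Experiment Setup row (a three-layer GNN with hidden size 20, followed by an MLP mapping embeddings to class logits) can be sketched roughly as below. This is a minimal numpy illustration, not the authors' implementation: the GCN-style mean aggregation, the toy ring graph, the input dimension, and the number of classes are all illustrative assumptions; only the layer count and hidden size come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(a_norm, h, w):
    # One message-passing step: average neighbor features, project, apply ReLU.
    return np.maximum(a_norm @ h @ w, 0.0)

# Toy graph: 5 nodes in a ring (hypothetical stand-in for a dataset graph).
n, in_dim, hidden, n_classes = 5, 10, 20, 4
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
adj_hat = adj + np.eye(n)                               # add self-loops
a_norm = adj_hat / adj_hat.sum(axis=1, keepdims=True)   # row-normalized aggregation

x = rng.standard_normal((n, in_dim))

# Three GNN layers with a hidden size of 20, as described in the paper.
w1 = rng.standard_normal((in_dim, hidden)) * 0.1
w2 = rng.standard_normal((hidden, hidden)) * 0.1
w3 = rng.standard_normal((hidden, hidden)) * 0.1
h = gnn_layer(a_norm, x, w1)
h = gnn_layer(a_norm, h, w2)
h = gnn_layer(a_norm, h, w3)

# MLP head mapping the final node embeddings to per-class logits.
w_out = rng.standard_normal((hidden, n_classes)) * 0.1
logits = h @ w_out
print(logits.shape)  # one logit vector per node, for node classification
```

For graph classification (as in BA-2motifs and MUTAG), the node embeddings would additionally be pooled (e.g., by mean) into a single graph-level vector before the MLP head.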