Generative Causal Explanations for Graph Neural Networks
Authors: Wanyu Lin, Hao Lan, Baochun Li
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real-world datasets show that Gem achieves a relative increase of the explanation accuracy by up to 30% and speeds up the explanation process by up to 110× as compared to its state-of-the-art alternatives. |
| Researcher Affiliation | Academia | Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China; Department of Electrical & Computer Engineering, University of Toronto, Toronto, Canada. |
| Pseudocode | Yes | Algorithm 1 Distillation Process: Distill the top-k most relevant edges for each computation graph (see the distillation sketch below the table). |
| Open Source Code | Yes | The source code can be found in https://github.com/wanyulin/ICML2021-Gem. |
| Open Datasets | Yes | For graph classification, we use two benchmark datasets from bioinformatics Mutag (Debnath et al., 1991) and NCI1 (Wale et al., 2008). |
| Dataset Splits | Yes | Table 4 shows the detailed data splitting for model training, testing, and validation. Note that both classification models and our explanation models use the same data splitting. |
| Hardware Specification | Yes | All the experiments were performed on an NVIDIA GTX 1080 Ti GPU with an Intel Core i7-8700K processor. |
| Software Dependencies | No | Unless otherwise stated, all models, including GNN classification models and our explainer, are implemented using PyTorch and trained with Adam optimizer. |
| Experiment Setup | Yes | Specifically, we first apply an inference model parameterized by a three-layer GCN with output dimensions 32, 32, and 16. Then the generative model is given by an inner product decoder between latent variables. The explainer models are trained with a learning rate of 0.01. We use mean square error as the loss for training Gem. In particular, it was optimized using Adam optimizer with a learning rate of 0.01 and 0.001 for explaining graph and node classification models, respectively. We train at batch size 32 for 100 epochs. (See the training-configuration sketch below the table.) |
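
The Pseudocode row above only names Algorithm 1's distillation step. As a rough illustration of how distilling the top-k most relevant edges of a computation graph could look, here is a minimal PyTorch sketch that scores each edge by the increase in the classifier's loss when that edge is deleted, then keeps the k highest-scoring edges. The function name `distill_top_k_edges`, the loss-difference scoring, and the dense-adjacency representation are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def distill_top_k_edges(gnn, adj, feats, label, k):
    """Hypothetical sketch of Algorithm 1's distillation step.

    Ranks each edge of a computation graph by its causal contribution
    (loss increase when the edge is deleted) and keeps the top-k edges.
    `gnn(feats, adj)` is assumed to return class logits of shape (1, C)
    for the whole graph; `label` is a 1-element class-index tensor.
    """
    with torch.no_grad():  # scoring only, no gradients needed
        base_loss = F.cross_entropy(gnn(feats, adj), label)
        edges = (adj > 0).nonzero(as_tuple=False)
        edges = edges[edges[:, 0] < edges[:, 1]]  # one entry per undirected edge
        scores = torch.zeros(len(edges))
        for i, (u, v) in enumerate(edges):
            perturbed = adj.clone()
            perturbed[u, v] = perturbed[v, u] = 0.0  # delete this single edge
            # Causal contribution: how much the prediction loss degrades.
            scores[i] = F.cross_entropy(gnn(feats, perturbed), label) - base_loss
    keep = scores.topk(min(k, len(edges))).indices
    mask = torch.zeros_like(adj)
    for u, v in edges[keep]:
        mask[u, v] = mask[v, u] = 1.0
    return adj * mask  # distilled subgraph used as the explainer's target
```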
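
To make the Experiment Setup row concrete, the sketch below wires together the reported hyper-parameters for the graph-classification case: a three-layer GCN inference model with output dimensions 32, 32, and 16, an inner-product decoder between latent variables, MSE loss, Adam at learning rate 0.01, batch size 32, and 100 epochs. The class names (`DenseGCNLayer`, `GemExplainer`), the dense-adjacency layout, and the random toy batch are assumptions; only the numeric settings come from the table. In the paper the explainer regresses toward the distilled edge weights produced by Algorithm 1, so the random `target` below merely stands in for those distilled masks.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """Hypothetical dense GCN layer: H' = act(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim, act=torch.relu):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.act = act

    def forward(self, x, adj_hat):
        return self.act(adj_hat @ self.lin(x))

class GemExplainer(nn.Module):
    """Encoder-decoder explainer with the reported dimensions: a
    three-layer GCN inference model (32, 32, 16) followed by an
    inner-product generative model over the latent node embeddings."""
    def __init__(self, in_dim):
        super().__init__()
        self.enc1 = DenseGCNLayer(in_dim, 32)
        self.enc2 = DenseGCNLayer(32, 32)
        self.enc3 = DenseGCNLayer(32, 16, act=lambda t: t)  # no final activation

    def forward(self, x, adj_hat):
        z = self.enc3(self.enc2(self.enc1(x, adj_hat), adj_hat), adj_hat)
        return z @ z.transpose(-1, -2)  # inner-product decoder: edge weights

# Hypothetical toy batch: 32 graphs, 10 nodes each, 14-dim node features.
x = torch.rand(32, 10, 14)
adj_hat = torch.eye(10).expand(32, 10, 10)
target = torch.rand(32, 10, 10)      # stand-in for distilled edge weights
loader = [(x, adj_hat, target)]

# Training settings quoted in the table (graph-classification case).
explainer = GemExplainer(in_dim=14)
opt = torch.optim.Adam(explainer.parameters(), lr=0.01)
loss_fn = nn.MSELoss()               # mean square error, as reported

for epoch in range(100):             # 100 epochs, batches of 32 graphs
    for xb, ab, tb in loader:
        opt.zero_grad()
        loss = loss_fn(explainer(xb, ab), tb)
        loss.backward()
        opt.step()
```

For explaining node classification models, the table reports the same setup with the Adam learning rate lowered to 0.001.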