Reinforcement Learning Enhanced Explainer for Graph Neural Networks
Authors: Caihua Shan, Yifei Shen, Yao Zhang, Xiang Li, Dongsheng Li
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real datasets show that RG-Explainer outperforms state-of-the-art GNN explainers. |
| Researcher Affiliation | Collaboration | 1 Microsoft Research Asia ({caihua.shan,dongsheng.li}@microsoft.com); 2 The Hong Kong University of Science and Technology (yshenaw@connect.ust.hk); 3 Fudan University (yaozhang@fudan.edu.cn); 4 East China Normal University (xiangli@dase.ecnu.edu.cn) |
| Pseudocode | Yes | Due to the space limitation, we move the pseudocode, implementation details and ablation study to the supplementary materials. |
| Open Source Code | Yes | We also attach our codes in the supplementary materials. |
| Open Datasets | Yes | We use six datasets, in which four synthetic datasets (BA-Shapes, BA-Community, Tree-Cycles and Tree-Grid) are used for the node classification task and two datasets (BA-2motifs and Mutagenicity) are used for the graph classification task. |
| Dataset Splits | No | Specifically, we vary the training set sizes from {10%, 30%, 50%, 70%, 90%} and take the remaining instances for testing. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | For fairness, we follow the experimental setup in [17, 12], i.e., the same datasets, trained GNN model and evaluation metrics. Besides, we also utilize the same fine-tuned parameters in [12] for our competitors, GNNExplainer and PGExplainer. |
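The Dataset Splits row quotes a protocol of varying the training fraction over {10%, 30%, 50%, 70%, 90%} and holding out the remaining instances for testing. A minimal sketch of such a split, assuming Python with scikit-learn and placeholder instance indices and labels (none of this is taken from the paper's released code), could look like:

```python
# Hypothetical sketch of the quoted split protocol: train on a fraction
# drawn from {10%, 30%, 50%, 70%, 90%}, test on the remaining instances.
from sklearn.model_selection import train_test_split

instances = list(range(1000))          # placeholder instance indices
labels = [i % 2 for i in instances]    # placeholder binary labels

for train_frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    train_idx, test_idx = train_test_split(
        instances,
        train_size=train_frac,
        stratify=labels,               # keep class balance; an assumption
        random_state=0,                # fixed seed for repeatability
    )
    print(f"train={train_frac:.0%}: {len(train_idx)} train / {len(test_idx)} test")
```

The stratification and seed are editorial choices for reproducibility, not details reported in the paper.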