Parameterized Explainer for Graph Neural Network
Authors: Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, Xiang Zhang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification over the leading baseline. |
| Researcher Affiliation | Collaboration | Dongsheng Luo¹, Wei Cheng², Dongkuan Xu¹, Wenchao Yu², Bo Zong², Haifeng Chen², Xiang Zhang¹ — ¹The Pennsylvania State University, ²NEC Labs America. ¹{dul262,dux19,xzz89}@psu.edu, ²{weicheng,wyu,bzong,haifeng}@nec-labs.com |
| Pseudocode | Yes | Detailed algorithms can be found in the Appendix. |
| Open Source Code | Yes | The code and data used in this work are available at https://github.com/flyingdoog/PGExplainer (cited as footnote 2 in the paper). |
| Open Datasets | Yes | We follow the setting in GNNExplainer and construct four kinds of node classification datasets, BA-Shapes, BA-Community, Tree-Cycles, and Tree-Grids [53]. Furthermore, we also construct a graph classification dataset, BA-2motifs... We also include a real-life dataset, MUTAG, for graph classification, which is also used in previous work [53]. |
| Dataset Splits | No | The paper states 'We follow the experimental settings in GNNExplainer [53]' but does not explicitly provide the specific percentages or counts for train/validation/test splits for all datasets within the main text. |
| Hardware Specification | No | No explicit hardware specifications (e.g., specific GPU or CPU models) used for running the experiments were provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) were provided in the paper. |
| Experiment Setup | No | The paper states 'We refer readers to the Appendix for more training details' and mentions tuning temperature τ, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed optimizer settings within the main text. |
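The AUC figure quoted in the Research Type row treats explanation as a binary ranking problem: edges inside the ground-truth motif are the positive class, and the explainer's edge-importance weights are the scores. A minimal sketch of that computation, using scikit-learn and illustrative data (the labels and scores below are hypothetical, not taken from the paper's experiments):

```python
# Sketch of the explanation-AUC metric: rank edges by the importance
# weight an explainer assigns them, and score that ranking against
# ground-truth motif membership. All values here are illustrative.
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth: 1 = edge belongs to the planted motif.
edge_labels = [1, 1, 0, 0, 1, 0]
# Hypothetical edge-importance scores from an explainer.
edge_scores = [0.9, 0.8, 0.3, 0.1, 0.7, 0.4]

auc = roc_auc_score(edge_labels, edge_scores)
print(round(auc, 3))  # every motif edge outranks every non-motif edge -> 1.0
```

A "24.7% relative improvement" then means the new explainer's AUC is 1.247 times the leading baseline's AUC on the same ranking task.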