Towards Multi-Grained Explainability for Graph Neural Networks

Authors: Xiang Wang, Yingxin Wu, An Zhang, Xiangnan He, Tat-Seng Chua

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on both synthetic and real-world datasets show the superiority of our explainer, in terms of AUC on explaining graph classification over the leading baselines." |
| Researcher Affiliation | Academia | "Xiang Wang, Ying-Xin Wu, An Zhang, Xiangnan He, Tat-Seng Chua. Sea-NExT Joint Lab, National University of Singapore; University of Science and Technology of China. xiangwang@u.nus.edu, wuyxin@mail.ustc.edu.cn, an_zhang@nus.edu.sg, xiangnanhe@gmail.com, dcscts@nus.edu.sg" |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Our codes and datasets are available at https://github.com/Wuyxin/ReFine." |
| Open Datasets | Yes | "Molecule graph classification. We use the Mutagenicity dataset [40, 41]... Scene graph classification. Following the previous work [10], we select 4,443 (images, scene graphs) pairs from Visual Genome [43]... Handwriting graph classification. We use the MNIST superpixel dataset [45]... Motif graph classification. We follow prior studies [6, 7] to create a synthetic dataset, BA-3motif..." |
| Dataset Splits | No | The paper mentions "testing accuracy" and a "testing dataset" for evaluation, but it does not explicitly report training/validation/test splits or cross-validation details for its own model. |
| Hardware Specification | Yes | "All experiments are done on a single Tesla V100 SXM2 GPU (32 GB)." |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | "For our ReFine framework, we use the Adam optimizer and set the learning rate of pre-training and fine-tuning as 1e-3 and 1e-4, respectively." (See the sketch below the table.) |
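
To make the reported experiment setup concrete, here is a minimal sketch assuming PyTorch (the linked repository is PyTorch-based). The `explainer` module is a hypothetical placeholder, not ReFine's actual architecture; only the optimizer choice and the two learning rates come from the paper.

```python
# Minimal sketch of the reported optimizer configuration.
# Assumptions: PyTorch; `explainer` is a hypothetical stand-in network,
# since the paper's quote specifies only Adam and the two learning rates.
import torch
import torch.nn as nn

explainer = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))

# Pre-training stage: Adam with learning rate 1e-3, as reported.
pretrain_optimizer = torch.optim.Adam(explainer.parameters(), lr=1e-3)

# Fine-tuning stage: Adam with learning rate 1e-4, as reported.
finetune_optimizer = torch.optim.Adam(explainer.parameters(), lr=1e-4)
```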