RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks

Authors: Jiaxing Zhang, Zhuomin Chen, Hao Mei, Longchao Da, Dongsheng Luo, Hua Wei

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed method on three benchmark datasets and a real-life dataset introduced by us, and extensive experiments demonstrate its effectiveness in interpreting GNN models in regression tasks.
Researcher Affiliation | Academia | 1 New Jersey Institute of Technology, 2 Florida International University, 3 Arizona State University
Pseudocode | Yes | Algorithm 1: Graph Mix-up Algorithm (an illustrative mix-up sketch follows the table)
Open Source Code | Yes | Our data and code are available at: https://github.com/jz48/RegExplainer
Open Datasets | Yes | We formulate three synthetic datasets and a real-world dataset, as shown in Table 1, to address the lack of graph regression datasets with ground-truth explanations. The datasets include BA-Motif-Volume and BA-Motif-Counting, which are based on BA-shapes [23], Triangles [39], and Crippen [40].
Dataset Splits | Yes | We split the dataset into 8:1:1, where we train the GNN base model with 8 folds, and train and test the explainer models with 1 fold each (a minimal split sketch follows the table).
Hardware Specification | Yes | All experiments are conducted on a Linux machine (Ubuntu 16.04.4 LTS, GNU/Linux 4.4.0-210-generic x86_64) with 4 NVIDIA TITAN Xp (12 GB) GPUs.
Software Dependencies | Yes | All code is written in Python 3.8.13 with PyTorch 1.12.1, PyTorch Geometric (PyG) 2.1.0.post1, torch-scatter 2.0.9, and torch-sparse 0.6.15 (a version-check sketch follows the table).
Experiment Setup | Yes | Additionally, we set all variants with the same configurations as the original RegExplainer, including learning rate, training epochs, and the hyperparameters η, α, and β.
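
The Pseudocode row refers to Algorithm 1 (Graph Mix-up Algorithm). The snippet below is not that algorithm reproduced from the paper; it is a minimal sketch, assuming the mix-up keeps the high-importance (explanation) edges of one graph and the low-importance (label-irrelevant) edges of another in a shared node index space. The function name mixup_graphs, the 0.5 threshold, and the thresholding rule are illustrative assumptions; consult Algorithm 1 and the released code for the exact construction, including how edges between the two parts are handled.

```python
# Illustrative sketch only; NOT the paper's Algorithm 1.
# Assumes each graph is a torch_geometric Data object with node features `x`
# and `edge_index`, plus an edge-importance mask produced by the explainer.
import torch
from torch_geometric.data import Data


def mixup_graphs(data_a: Data, mask_a: torch.Tensor,
                 data_b: Data, mask_b: torch.Tensor,
                 threshold: float = 0.5) -> Data:
    """Mix graph A's explanation edges with graph B's label-irrelevant edges."""
    # Explanation part of A: edges whose importance clears the threshold.
    keep_a = data_a.edge_index[:, mask_a >= threshold]
    # Label-irrelevant part of B: low-importance edges, with node indices
    # shifted so both parts live in one index space.
    keep_b = data_b.edge_index[:, mask_b < threshold] + data_a.num_nodes
    edge_index = torch.cat([keep_a, keep_b], dim=1)
    x = torch.cat([data_a.x, data_b.x], dim=0)  # stack node features
    return Data(x=x, edge_index=edge_index)
```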
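
The Dataset Splits row describes an 8:1:1 split: 8 folds to train the GNN base model, 1 fold to train the explainer, and 1 fold to test it. Below is a minimal index-level sketch of such a split; the function name, seed, and shuffling are assumptions, and the released code may assign folds differently.

```python
import random


def split_8_1_1(num_graphs: int, seed: int = 0):
    """Return index lists for base-model training, explainer training, and
    explainer testing in an 8:1:1 ratio (sketch, not the released splitter)."""
    idx = list(range(num_graphs))
    random.Random(seed).shuffle(idx)
    n_base = int(0.8 * num_graphs)        # 8 folds for the GNN base model
    n_expl_train = int(0.1 * num_graphs)  # 1 fold for training the explainer
    base_train = idx[:n_base]
    expl_train = idx[n_base:n_base + n_expl_train]
    expl_test = idx[n_base + n_expl_train:]  # remaining fold for testing
    return base_train, expl_train, expl_test
```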
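
The Software Dependencies row pins exact package versions. The snippet below is an assumed convenience check, not part of the repository; it compares the installed versions against those pins.

```python
# Sketch: verify installed versions against the pins quoted above.
import importlib

PINS = {
    "torch": "1.12.1",
    "torch_geometric": "2.1.0.post1",
    "torch_scatter": "2.0.9",
    "torch_sparse": "0.6.15",
}

for name, expected in PINS.items():
    module = importlib.import_module(name)
    installed = getattr(module, "__version__", "unknown")
    status = "OK" if installed == expected else "MISMATCH"
    print(f"{name}: installed {installed}, expected {expected} -> {status}")
```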