GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games
Authors: Shichang Zhang, Yozen Liu, Neil Shah, Yizhou Sun
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that GStarX produces qualitatively more intuitive explanations, and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification. We conduct experiments on datasets from different domains including synthetic graphs, chemical graphs, and text graphs. Quantitative studies. We report averaged test set H-Fidelity in Table 1. Qualitative studies. We visualize the explanations of graphs in Graph-SST2 in Figure 2 and compare them qualitatively. Ablation study and analysis. |
| Researcher Affiliation | Collaboration | Shichang Zhang (University of California, Los Angeles), Yozen Liu (Snap Inc.), Neil Shah (Snap Inc.), Yizhou Sun (University of California, Los Angeles); {shichang, yzsun}@cs.ucla.edu, {yliu2, nshah}@snap.com |
| Pseudocode | Yes | Algorithm 1 GStarX: Graph Structure-Aware Explanation. Algorithm 2 The Compute-HN Function. |
| Open Source Code | Yes | Code available at https://github.com/ShichangZh/GStarX |
| Open Datasets | Yes | We conduct experiments on datasets from different domains including synthetic graphs, chemical graphs, and text graphs. MUTAG [8], BACE, and BBBP [39] contain chemical molecule graphs. Graph-SST2 and Twitter [43] contain graphs constructed from text. BA-2Motifs [25] contains graphs with a Barabási-Albert (BA) base graph. |
| Dataset Splits | Yes | We use a 10-fold cross-validation setup for all datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud instance specifications). |
| Software Dependencies | No | The paper mentions software like PyTorch Geometric, BERT embeddings, and Biaffine parser, but does not provide specific version numbers for these or other dependencies required for reproducibility. |
| Experiment Setup | Yes | All models are trained to convergence with hyperparameters and performance shown in Appendix A.2. Our models were trained for 300 epochs for all datasets using Adam optimizer with an initial learning rate of 0.01 and weight decay of 0.0001. |
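
To make the reported training configuration concrete, below is a minimal, hypothetical sketch of graph-classifier training with the stated hyperparameters (Adam optimizer, learning rate 0.01, weight decay 0.0001, 300 epochs), assuming a PyTorch Geometric setup. The `GCNClassifier` architecture and the choice of MUTAG via `TUDataset` are illustrative assumptions, not the authors' exact model or pipeline; their architectures and performance are reported in Appendix A.2 of the paper.

```python
# Hypothetical sketch: training a graph classifier with the hyperparameters
# reported in the paper (Adam, lr=0.01, weight_decay=1e-4, 300 epochs).
# The GCN architecture and dataset choice below are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool


class GCNClassifier(torch.nn.Module):  # hypothetical architecture
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)          # graph-level readout
        return self.lin(x)


dataset = TUDataset(root="data", name="MUTAG")  # one of the chemical datasets
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = GCNClassifier(dataset.num_node_features, 64, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)

model.train()
for epoch in range(300):                        # 300 epochs, as reported
    for data in loader:
        optimizer.zero_grad()
        out = model(data.x, data.edge_index, data.batch)
        loss = F.cross_entropy(out, data.y)
        loss.backward()
        optimizer.step()
```

A model trained this way would be the prediction model that GStarX subsequently explains; the explanation method itself (Algorithms 1 and 2) operates on the trained classifier and is not shown here.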