Graph Neural Network Explanations are Fragile
Authors: Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We systematically evaluate our attacks on multiple graph datasets, GNN tasks, and diverse types of GNN explainers. Our experimental results show that existing GNN explainers are fragile. For instance, when perturbing only 2 edges, the explanatory edges can be 70% different from those without the attack. |
| Researcher Affiliation | Academia | Nanchang University, China; Illinois Institute of Technology, USA; Milwaukee School of Engineering, USA; The Pennsylvania State University, USA |
| Pseudocode | Yes | Algorithm 1: Loss-based Attack (Appendix D); Algorithm 2: Deduction-based Attack (Appendix D) |
| Open Source Code | Yes | Code is at: https://github.com/JetRichardLee/Attack-XGNN |
| Open Datasets | Yes | For node classification, following existing works (Ying et al., 2019; Luo et al., 2020) we choose three synthetic datasets, i.e., BA House, BA Community, and Tree Cycle. We also add one large real-world dataset, OGBN-Products (Bhatia et al., 2016). For graph classification, we use two real-world datasets, MUTAG (Kriege & Mutzel, 2012) and Reddit-Binary (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | No | The paper mentions selecting 'testing nodes/graphs' for evaluation and reporting results 'on a set of testing nodes and graphs', but it does not provide specific train/validation/test split percentages, absolute sample counts for each split, or detailed splitting methodology. |
| Hardware Specification | Yes | Table 5 shows the average runtime on the 6 datasets in the default setting without any GPU (a MacBook with a 2.30GHz CPU and 8GB of RAM). |
| Software Dependencies | No | The paper mentions using 'GCN (Kipf & Welling, 2017)' as the base GNN model and specific GNN explainers (GNNExplainer, PGExplainer, GSAT), but does not provide version numbers for any software libraries or dependencies (e.g., Python, PyTorch, TensorFlow, specific GNN frameworks). |
| Experiment Setup | Yes | Table 8 in Appendix C summarizes the default values of the key parameters in the explainers and our attack, e.g., the perturbation budget ξ and the top-k selection parameter k. When studying the impact of each parameter, we fix the others to their default values. (Table 8 lists, per task and dataset: #Cases, \|E\|avg, k, ξ, N, β, γ.) |
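The headline fragility result ("when perturbing only 2 edges, the explanatory edges can be 70% different") is measured as the change in the top-k explanatory edge set before and after the attack. A minimal sketch of that metric is below; the function and variable names are illustrative assumptions, not taken from the paper's released code.

```python
# Hypothetical sketch: fraction of top-k explanatory edges that change
# after a small edge perturbation. Names here are illustrative only.

def topk_edges(edge_scores, k):
    """Return the set of the k highest-scoring edges."""
    ranked = sorted(edge_scores, key=edge_scores.get, reverse=True)
    return set(ranked[:k])

def explanation_difference(clean_scores, attacked_scores, k):
    """1 - (overlap of clean and attacked top-k sets) / k."""
    clean = topk_edges(clean_scores, k)
    attacked = topk_edges(attacked_scores, k)
    return 1.0 - len(clean & attacked) / k

# Toy example: explainer edge scores before/after perturbing 2 edges.
clean = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.4, (3, 4): 0.2, (0, 4): 0.1}
attacked = {(0, 1): 0.9, (2, 3): 0.8, (0, 4): 0.7, (1, 2): 0.3, (3, 4): 0.2}
print(explanation_difference(clean, attacked, k=3))  # ~1/3 of edges changed
```

A difference of 0.7 at k and ξ as in the paper's default setting would correspond to 70% of the explanatory edges being replaced under the attack.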