Self-Interpretable Graph Learning with Sufficient and Necessary Explanations
Authors: Jiale Deng, Yanyan Shen
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on various GNNs and real-world graphs show that SUNNY-GNN yields accurate predictions and faithful explanations, outperforming the state-of-the-art methods by improving 3.5% prediction accuracy and 13.1% explainability fidelity on average. |
| Researcher Affiliation | Academia | Jiale Deng, Yanyan Shen* Department of Computer Science and Engineering, Shanghai Jiao Tong University {jialedeng, shenyy}@sjtu.edu.cn |
| Pseudocode | No | Due to the space limit, we provide the detailed algorithm in the supplementary material. |
| Open Source Code | Yes | Our code and data are available at https://github.com/SJTU-Quant/SUNNY-GNN. |
| Open Datasets | Yes | We select three widely-used benchmark datasets in citation networks: Citeseer, Cora, and Pubmed (Yang, Cohen, and Salakhudinov 2016)... Amazon (Shchur et al. 2018)... Coauthor-CS and Coauthor-Physics (Shchur et al. 2018)... For further tasks in heterogeneous scenarios, we use IMDB, DBLP and ACM datasets. Their detailed statistics can be found in previous works (Wang et al. 2021a; Lv et al. 2021). |
| Dataset Splits | No | The paper reports '\|Vtrain\|', the number of labeled training nodes per dataset, in Table 1, but it does not give explicit training/validation/test splits (e.g., percentages or counts for every split), nor does it reference a standard split methodology in enough detail for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The only software mentioned is PyTorch ('We use PyTorch to implement SUNNY-GNN'), with no version numbers or details of other dependencies. |
| Experiment Setup | Yes | For SUNNY-GNN, we set coefficient of contrastive loss γ = 0.01 and the temperature hyperparameter τ = 0.1. All the experiments are conducted 5 times with different random seeds and average results with standard deviations are reported. |
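All of the graphs listed under Open Datasets are public benchmarks. The paper does not say which loader it uses, so the sketch below assumes PyTorch Geometric; the class choices, dataset name strings, and the `data` cache directory are illustrative assumptions, and the released SUNNY-GNN repository may obtain the data differently.

```python
# Hypothetical loader sketch for the homogeneous benchmarks (assumes
# PyTorch Geometric is installed; not taken from the SUNNY-GNN repository).
from torch_geometric.datasets import Planetoid, Amazon, Coauthor

root = "data"  # arbitrary local cache directory

citeseer = Planetoid(root, name="CiteSeer")
cora = Planetoid(root, name="Cora")
pubmed = Planetoid(root, name="PubMed")
amazon = Amazon(root, name="Computers")       # paper only says "Amazon"; subset assumed
coauthor_cs = Coauthor(root, name="CS")
coauthor_phys = Coauthor(root, name="Physics")

# The heterogeneous graphs (IMDB, DBLP, ACM) follow the statistics in
# Wang et al. 2021a and Lv et al. 2021 and are not sketched here.
for ds in (citeseer, cora, pubmed, amazon, coauthor_cs, coauthor_phys):
    graph = ds[0]
    print(ds.name, graph.num_nodes, graph.num_edges)
```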
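The Experiment Setup row pins down two hyperparameters (γ = 0.01, τ = 0.1) and an averaging protocol (5 seeds, mean ± standard deviation). A minimal sketch of that protocol follows; the seed values, function names, and the way the contrastive loss is combined with the task loss are assumptions, not the paper's released code.

```python
import statistics

import torch

# Hyperparameters reported in the experiment setup.
GAMMA = 0.01      # coefficient of the contrastive loss
TAU = 0.1         # temperature of the contrastive loss
SEEDS = range(5)  # 5 runs with different random seeds (exact seed values not reported)


def train_and_evaluate(seed: int) -> float:
    """Placeholder for one SUNNY-GNN training/evaluation run.

    The real model, the contrastive loss (weighted by GAMMA, temperature TAU),
    and the data pipeline live in https://github.com/SJTU-Quant/SUNNY-GNN;
    this stub only fixes the seed and returns a dummy accuracy so the
    aggregation below is runnable.
    """
    torch.manual_seed(seed)
    # total_loss = task_loss + GAMMA * contrastive_loss(tau=TAU)  # assumed composition
    return float(torch.rand(1))  # replace with the actual test accuracy


# Report average results with standard deviations, as in the paper.
accuracies = [train_and_evaluate(s) for s in SEEDS]
print(f"accuracy: {statistics.mean(accuracies):.4f} ± {statistics.stdev(accuracies):.4f}")
```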