What Does My GNN Really Capture? On Exploring Internal GNN Representations
Authors: Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, Céline Robardet
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate DISCERN through several experiments. We first describe the datasets and the experimental setup. Then, we compare our method against several instance-level and model-level baselines. |
| Researcher Affiliation | Academia | (1) Univ Lyon, INSA Lyon, CNRS, UCBL, LIRIS, UMR5205, FR-69621 Villeurbanne; (2) EPITA Research and Development Laboratory (LRDE), FR-94276 Le Kremlin-Bicêtre |
| Pseudocode | No | The paper describes algorithmic steps and processes in natural language within paragraphs, but does not present a formal pseudocode block or an explicitly labeled "Algorithm" section. |
| Open Source Code | Yes | The code source of the method is made available: https://github.com/luvf/inside-gnn. |
| Open Datasets | Yes | Experiments are performed on three graph classification datasets (Aids [Morris et al., 2020], BBBP [Wu et al., 2017], Mutagen [Morris et al., 2020]) depicting molecules and important properties in Chemistry or Drug Discovery (class). |
| Dataset Splits | No | The paper mentions using three datasets (Aids, BBBP, Mutagen) and training a GNN on each, but it does not specify explicit train/validation/test split percentages or counts, nor a splitting methodology (e.g., k-fold cross-validation). |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions various algorithms and models such as GCN, MCTS, and baselines like GNNExplainer, but it does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the implementation. |
| Experiment Setup | Yes | A 3-convolutional layer GNN (with K = 20) is trained on each dataset. We mine 10 activation rules per layer and for each class. |
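The Experiment Setup row pins down the architecture (3 convolutional layers, hidden dimension K = 20) but not the framework. Below is a minimal sketch of such a model, assuming PyTorch Geometric and a GCN backbone (the paper mentions GCN); the class name, ReLU activations, and mean-pool readout are illustrative assumptions, not the authors' released implementation (see the linked repository for that). Returning the per-layer node embeddings reflects that activation rules are mined per layer.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class ThreeLayerGCN(torch.nn.Module):
    """Sketch of a 3-convolutional-layer GCN with hidden size K = 20.

    Hypothetical reconstruction from the reported setup, not the
    authors' code: layer count and K come from the paper; the rest
    (activations, readout, classifier head) is assumed.
    """

    def __init__(self, num_node_features: int, num_classes: int, hidden: int = 20):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # Node embeddings after each convolutional layer; these are the
        # internal representations over which activation rules would be mined.
        h1 = F.relu(self.conv1(x, edge_index))
        h2 = F.relu(self.conv2(h1, edge_index))
        h3 = F.relu(self.conv3(h2, edge_index))
        # Graph-level readout feeding the classification head.
        logits = self.lin(global_mean_pool(h3, batch))
        return logits, (h1, h2, h3)
```

Under these assumptions, one model instance would be trained per dataset (Aids, BBBP, Mutagen), and the three returned embedding tensors give the per-layer activations from which 10 rules per layer and per class are mined.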