KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

Authors: Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas (pp. 6614-6622)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the proposed model on the graph classification task and the node classification task (discussed in the Appendix), and we also demonstrate the model's interpretability by visualizing the graph filters in the trained models as well as the output graphs. The graph classification results are shown in Table 1, with the best results highlighted in bold."
Researcher Affiliation | Collaboration | Aosong Feng (1), Chenyu You (1), Shiqiang Wang (2), Leandros Tassiulas (1). 1: Yale University, New Haven, CT, USA; 2: IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
Pseudocode | Yes | "Algorithm 1: Forward pass in the l-th KerGNN layer"
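Algorithm 1 itself is not reproduced on this page, but the core operation it describes, comparing a node's local subgraph against a learnable graph filter with a random walk kernel, can be sketched from the standard kernel definition. The function name, the decay weight `lam`, and the truncation length `max_steps` below are illustrative assumptions, not the paper's exact parameterization; a truncated p-step random walk kernel counts simultaneous walks in both graphs via the Kronecker (direct) product of their adjacency matrices.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, max_steps=3):
    """Truncated random walk kernel between two graphs, given their
    adjacency matrices A1 and A2. A length-p walk in the Kronecker
    product graph corresponds to a pair of simultaneous length-p
    walks, one in each input graph."""
    Ax = np.kron(A1, A2)              # adjacency of the direct product graph
    n = Ax.shape[0]
    ones = np.ones(n)
    k, walk = 0.0, np.eye(n)          # walk = Ax**p, starting at p = 0
    for p in range(max_steps + 1):
        k += (lam ** p) * (ones @ walk @ ones)  # weighted count of p-step walks
        walk = walk @ Ax
    return k
```

In a KerGNN-style layer, a kernel value like this (between each node's ego-subgraph and each trainable graph filter) would serve as the node's output feature, replacing the usual message-passing aggregation.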
Open Source Code | No | The paper mentions using "official implementations" for some baseline models and the "GraKeL library" for others, but does not provide a link to, or an explicit statement about, open-source code for the proposed KerGNNs.
Open Datasets | Yes | "We evaluate our proposed KerGNN model on 8 publicly available graph classification datasets. Specifically, we use DD (Dobson and Doig 2003), PROTEINS (Borgwardt et al. 2005), NCI1 (Schomburg et al. 2004), ENZYMES (Schomburg et al. 2004) for binary and multi-class classification of biological and chemical compounds, and we also use the social datasets IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and COLLAB (Yanardag and Vishwanathan 2015)."
Dataset Splits | Yes | "To make a fair comparison with state-of-the-art GNNs, we follow the cross-validation procedure described in Errica et al. (2019). We use a 10-fold cross-validation for model assessment and an inner holdout technique with a 90%/10% training/validation split for model selection, following the same dataset index splits as Errica et al. (2019)."
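The split protocol quoted above (outer 10-fold cross-validation for assessment, inner 90%/10% holdout for model selection) can be sketched in plain Python. This is a minimal illustration of the scheme, not the exact index splits of Errica et al. (2019), which the paper says it reuses; the function name and seed are assumptions.

```python
import random

def cv_splits(n_samples, n_folds=10, val_frac=0.1, seed=0):
    """Yield (train, val, test) index lists: an outer k-fold split for
    model assessment, with an inner holdout carved out of each
    training fold for model selection (hyper-parameter tuning)."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]   # n_folds disjoint folds
    for k in range(n_folds):
        test = folds[k]                                  # held-out assessment fold
        rest = [i for j, f in enumerate(folds) if j != k for i in f]
        n_val = max(1, round(val_frac * len(rest)))      # inner 10% holdout
        val, train = rest[:n_val], rest[n_val:]
        yield train, val, test
```

Each of the 10 outer iterations reports test performance of the model selected on the inner validation split, so the final score is a mean over 10 assessment folds.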
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using the "Adam optimizer" and the "GraKeL library (Siglidis et al. 2020)" but does not provide specific version numbers for these software dependencies or for other key components such as PyTorch/TensorFlow/Python.
Experiment Setup | Yes | "We use the Adam optimizer with an initial learning rate of 0.01 and decay the learning rate by half every 50 epochs. The hyper-parameters that we tune for each dataset include the learning rate, the dropout rate, the number of layers of KerGNNs and the MLP, the number of graph filters at each layer, the number of nodes in each graph filter, the number of nodes in each subgraph, and the hidden dimension of each KerGNN layer. For the random walk kernel, we also tune the length of random walks."
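The learning-rate schedule quoted above (initial rate 0.01, halved every 50 epochs) is a standard step decay and can be written as a one-line function; the function and parameter names are illustrative.

```python
def learning_rate(epoch, base_lr=0.01, decay_every=50, factor=0.5):
    """Step decay: multiply the base learning rate by `factor`
    once every `decay_every` epochs."""
    return base_lr * factor ** (epoch // decay_every)
```

In PyTorch, the same schedule is commonly expressed as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)` wrapped around an `Adam` optimizer, though the paper does not state which framework it uses.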