Graph Convolutional Kernel Machine versus Graph Convolutional Networks

Authors: Zhihao Wu, Zhao Zhang, Jicong Fan

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The numerical results on benchmark datasets show that, besides the aforementioned advantages, GCKMs have at least competitive accuracy compared to GCNs.
Researcher Affiliation | Academia | (1) Shenzhen Research Institute of Big Data, Shenzhen, China; (2) Hefei University of Technology, Hefei, China; (3) The Chinese University of Hong Kong, Shenzhen, China
Pseudocode | No | The paper describes the model's formulation using mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/ZhihaoWu99/GCKM.
Open Datasets | Yes | We employ the three most widely adopted citation networks, Cora, Citeseer, and Pubmed, for evaluation; they are formed as unweighted and undirected graphs where each node represents a paper and edges denote citations between papers. For graph classification, IMDB-BINARY and IMDB-MULTI are movie collaboration datasets; COLLAB is a scientific collaboration dataset; MUTAG, PROTEINS, and PTC are three bioinformatics datasets.
Dataset Splits | Yes | In node classification, following vanilla GCN [Kipf and Welling, 2017], the nodes are split into three sets: a training set containing 20 samples per class, and validation and test sets with 500 and 1,000 samples respectively; the standard fixed split is the same as in [Yang et al., 2016]. (A data-loading sketch illustrating this split follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names and their exact versions) required to reproduce the experiments.
Experiment Setup | Yes | We use a 2-layer GCKM for the experiments and the kernel is specified as a Gaussian kernel; more detailed settings of GCKM can be found in Appendix F.1. ... We set the hidden dimensions to 32 for all hidden layers of GCN, APPNP, and JKNet, while SGC only has a learnable matrix W ∈ R^{n×c}, where n is the number of nodes and c is the number of classes; the learning rate is selected from {1×10^{-2}, 1×10^{-3}, 1×10^{-4}} and the weight decay from {5×10^{-4}, 5×10^{-5}, 5×10^{-6}}. (A grid-search sketch reflecting these baseline settings follows the table.)
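
The citation-network split quoted in the "Dataset Splits" row is the standard public Planetoid split of Yang et al. [2016]. Below is a minimal loading sketch for the node- and graph-classification benchmarks listed above, assuming PyTorch Geometric as the data backend; the paper does not state its software stack, and `PTC_MR` is an assumed name for the PTC variant.

```python
# Illustrative sketch (assumes PyTorch Geometric); not taken from the authors' repository.
from torch_geometric.datasets import Planetoid, TUDataset

# Node classification: Cora, Citeseer, Pubmed with the fixed public split
# (20 labelled nodes per class for training, 500 validation, 1,000 test).
for name in ["Cora", "Citeseer", "Pubmed"]:
    data = Planetoid(root="data", name=name, split="public")[0]
    print(name,
          int(data.train_mask.sum()),  # 20 * number of classes
          int(data.val_mask.sum()),    # 500
          int(data.test_mask.sum()))   # 1000

# Graph classification: the TU benchmark collections named in the paper
# ("PTC_MR" is an assumption; the paper only says "PTC").
for name in ["IMDB-BINARY", "IMDB-MULTI", "COLLAB", "MUTAG", "PROTEINS", "PTC_MR"]:
    dataset = TUDataset(root="data", name=name)
    print(name, len(dataset), dataset.num_classes)
```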
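
The baseline settings in the "Experiment Setup" row (2 layers, 32 hidden units, the stated learning-rate and weight-decay grids, selection on the validation split) can be exercised along the following lines. This is a hedged reconstruction rather than the authors' code: the GCN baseline uses PyTorch Geometric's `GCNConv`, and the Adam optimizer, 200-epoch budget, and selection-by-validation-accuracy rule are assumptions.

```python
# Illustrative grid search over the learning rates and weight decays reported
# in the paper; optimizer, epoch count, and selection rule are assumptions.
import itertools
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """2-layer GCN baseline with 32 hidden units, as described in the setup."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

dataset = Planetoid(root="data", name="Cora", split="public")
data = dataset[0]

best_acc, best_cfg = -1.0, None
for lr, wd in itertools.product([1e-2, 1e-3, 1e-4], [5e-4, 5e-5, 5e-6]):
    model = GCN(dataset.num_features, 32, dataset.num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    for _ in range(200):  # epoch budget is an assumption
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        pred = model(data.x, data.edge_index).argmax(dim=-1)
        acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
    if acc > best_acc:
        best_acc, best_cfg = acc, (lr, wd)

print("best (lr, weight_decay) by validation accuracy:", best_cfg, best_acc)
```

Selecting hyperparameters by validation accuracy on the fixed split mirrors the protocol implied by the 500-node validation set; GCKM itself replaces the learned layers with Gaussian-kernel machines, for which the authors' repository and Appendix F.1 should be consulted.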