Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels

Authors: Simon S. Du, Kangcheng Hou, Russ R. Salakhutdinov, Barnabas Poczos, Ruosong Wang, Keyulu Xu

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirically, we test GNTKs on graph classification datasets and show they achieve strong performance." |
| Researcher Affiliation | Academia | Simon S. Du (Institute for Advanced Study, ssdu@ias.edu); Kangcheng Hou (Zhejiang University, kangchenghou@gmail.com); Barnabás Póczos (Carnegie Mellon University, bapoczos@cs.cmu.edu); Ruslan Salakhutdinov (Carnegie Mellon University, rsalakhu@cs.cmu.edu); Ruosong Wang (Carnegie Mellon University, ruosongw@andrew.cmu.edu); Keyulu Xu (Massachusetts Institute of Technology, keyulu@mit.edu) |
| Pseudocode | No | The paper provides mathematical formulas and derivations for the GNTK calculations, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code, nor a link to a code repository for the described methodology. |
| Open Datasets | Yes | "Datasets. The benchmark datasets include four bioinformatics datasets MUTAG, PTC, NCI1, PROTEINS and three social network datasets COLLAB, IMDB-BINARY, IMDB-MULTI." |
| Dataset Splits | Yes | "Following common practices of evaluating performance of graph classification models [Yanardag and Vishwanathan, 2015], we perform 10-fold cross validation and report the mean and standard deviation of validation accuracies." |
| Hardware Specification | Yes | "On IMDB-B dataset, running GIN with the default setup (official implementation of Xu et al. [2019a]) takes 19 minutes on a TITAN X GPU and running GNTK only takes 2 minutes." |
| Software Dependencies | No | The paper mentions the "official implementation of Xu et al. [2019a]" but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | "Following common practices of evaluating performance of graph classification models [Yanardag and Vishwanathan, 2015], we perform 10-fold cross validation and report the mean and standard deviation of validation accuracies. More details about the experiment setup can be found in Section B of the supplementary material. For IMDB-BINARY, we vary the number of BLOCK operations in {2, 3, 4, 5, 6}. For NCI1, we vary the number of BLOCK operations in {8, 10, 12, 14, 16}. For both datasets, we vary the number of MLP layers in {1, 2, 3}." |
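The experiment-setup protocol quoted above — 10-fold cross validation over a small grid of BLOCK counts and MLP depths — can be sketched as follows. This is a minimal illustrative harness, not the authors' code: `kfold_indices`, `cross_validate`, and the `evaluate` callback are hypothetical names, and the grids are the IMDB-BINARY ranges quoted from the paper.

```python
import random
import statistics

def kfold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(evaluate, n_samples, grid, k=10):
    """Grid search with k-fold CV.

    `evaluate(train_idx, val_idx, num_blocks, num_mlp_layers)` is a
    user-supplied callback returning a validation accuracy; here it
    stands in for fitting a GNTK kernel regressor on the train folds.
    Returns (mean_acc, std_acc, num_blocks, num_mlp_layers) for the
    best-scoring hyperparameter setting.
    """
    best = None
    for num_blocks, num_mlp_layers in grid:
        folds = kfold_indices(n_samples, k)
        accs = []
        for i in range(k):
            val = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            accs.append(evaluate(train, val, num_blocks, num_mlp_layers))
        mean, std = statistics.mean(accs), statistics.pstdev(accs)
        if best is None or mean > best[0]:
            best = (mean, std, num_blocks, num_mlp_layers)
    return best

# Hyperparameter grid quoted in the paper for IMDB-BINARY:
# BLOCK operations in {2, 3, 4, 5, 6}, MLP layers in {1, 2, 3}.
grid = [(b, m) for b in [2, 3, 4, 5, 6] for m in [1, 2, 3]]
```

Reporting the mean and standard deviation of the per-fold validation accuracies, as the paper does, corresponds to the `mean` and `std` values tracked for the selected setting.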