Temporal Graph Neural Tangent Kernel with Graphon-Guaranteed

Authors: Katherine Tieu, Dongqi Fu, Yada Zhu, Hendrik Hamann, Jingrui He

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In addition to the theoretical analysis, we also perform extensive experiments, not only demonstrating the superiority of Temp-G3NTK in the temporal graph classification task, but also showing that Temp-G3NTK achieves very competitive performance in node-level tasks like node classification, compared with various SOTA graph kernel and graph representation learning baselines.
Researcher Affiliation | Collaboration | Katherine Tieu, University of Illinois Urbana-Champaign, kt42@illinois.edu; Dongqi Fu, Meta AI, dongqifu@meta.com; Yada Zhu, IBM Research, yzhu@us.ibm.com; Hendrik Hamann, IBM Research, hendrikh@us.ibm.com; Jingrui He, University of Illinois Urbana-Champaign, jingrui@illinois.edu
Pseudocode | Yes | The pseudo-code for computing the Temp-G3NTK kernel as above is shown in Appendix A.
Open Source Code | Yes | Our code is available at https://github.com/kthrn22/TempGNTK
Open Datasets | Yes | Targeting temporal graph classification, we conduct experiments on one of the most advanced temporal graph benchmarks with graph-level labels, i.e., TUDataset [32]. The four datasets are INFECTIOUS, DBLP, FACEBOOK, and TUMBLR; detailed dataset statistics can be found in Appendix G.1. Additionally, we also leverage the larger-scale temporal datasets REDDIT, WIKIPEDIA, LASTFM, and MOOC from [25].
Dataset Splits | Yes | For each dataset above, we evaluate temporal graph classification accuracy by conducting 5-fold cross-validation and report the mean and standard deviation of test accuracy. Specifically, given a dataset of n temporal graphs {G1, G2, ..., Gn} and their labels {y1, y2, ..., yn}... The training, validation, and test sets of tgbn-trade are defined in the TGB package with 70%/15%/15% chronological splits.
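The 5-fold protocol above can be sketched with a precomputed gram matrix, since kernel methods only need pairwise values, not raw graphs. This is a minimal illustration, not the authors' code: the linear gram matrix over random features and the 1-nearest-neighbor rule in kernel-induced distance are stand-ins for the pairwise Temp-G3NTK values and the SVM predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.normal(size=(n, 5))                 # toy stand-ins for n temporal graphs
y = (X[:, 0] > 0).astype(int)               # toy graph-level labels
K = X @ X.T                                 # stand-in for the pairwise kernel gram matrix

# 5-fold cross-validation: slice the precomputed gram matrix per fold.
idx = rng.permutation(n)
folds = np.array_split(idx, 5)
accs = []
for i in range(5):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(5) if j != i])
    # 1-NN in the kernel-induced metric: d^2(a, b) = K(a,a) + K(b,b) - 2 K(a,b)
    d2 = (np.diag(K)[test][:, None] + np.diag(K)[train][None, :]
          - 2 * K[np.ix_(test, train)])
    pred = y[train][np.argmin(d2, axis=1)]
    accs.append((pred == y[test]).mean())

# Report mean and standard deviation of test accuracy, as in the paper.
print(f"mean={np.mean(accs):.3f} std={np.std(accs):.3f}")
```

The key point is that the gram matrix is computed once for all graph pairs; each fold then reads its train/train and test/train sub-blocks, which is exactly what a precomputed-kernel SVM consumes.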
Hardware Specification | No | The paper provides runtime comparisons in Table 3 but does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | We adopt the implementations of graph kernels from the GRAKEL library [43] and the implementations of graph representation learning methods from the Karate Club library [40]. We adopt the default hyperparameters from the implementations in both libraries.
Experiment Setup | Yes | Upon obtaining the time representation as in Eq. 1, we let the dimension of the time representation be dt = 25 and α = β = dt. In order to leverage Temp-G3NTK for graph classification, we employ a C-SVM as the kernel predictor, with the gram matrix of pairwise Temp-G3NTK values on the training set as the pre-computed kernel. The regularization parameter C of the SVM classifier is sampled evenly from 120 values in the interval [10^-2, 10^4] in log scale, and the maximum number of iterations is set to 5 × 10^5. For the number of BLOCK operations in our Temp-G3NTK formula, L, we search over {1, 2, 3}.
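The hyperparameter search described here is concrete enough to sketch. The snippet below builds a log-scale grid of 120 values of C over [10^-2, 10^4] and a toy symmetric positive semi-definite gram matrix; the random features are hypothetical placeholders for the actual pairwise Temp-G3NTK values.

```python
import numpy as np

# Hypothetical toy features standing in for temporal graphs; in the paper the
# kernel would be the precomputed gram matrix of pairwise Temp-G3NTK values.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
gram = feats @ feats.T          # linear gram matrix: symmetric and PSD

# C grid: 120 values sampled evenly in log scale over [1e-2, 1e4],
# matching the paper's stated search range for the SVM regularizer.
C_grid = np.logspace(-2, 4, num=120)

# Sanity checks that the matrix is a valid precomputed kernel.
assert np.allclose(gram, gram.T)
assert np.all(np.linalg.eigvalsh(gram) >= -1e-9)   # PSD up to rounding

print(len(C_grid), C_grid[0], C_grid[-1])
```

Each C in the grid would then be tried with an SVM that accepts a precomputed kernel (e.g., scikit-learn's `SVC(kernel="precomputed", C=c, max_iter=500_000)`, matching the 5 × 10^5 iteration cap), selecting C by cross-validation accuracy.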