Graph Neural Tangent Kernel: Convergence on Large Graphs
Authors: Sanjukta Krishnagopal, Luana Ruiz
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These results are verified empirically on node regression and classification tasks. Lastly, we verify our theoretical results in three numerical applications: prediction of opinion dynamics on random graphs, movie recommendation using the MovieLens dataset, and node classification on the Cora, CiteSeer and PubMed networks. We observe the convergence of the GNTK (Sec. 6.1), the effect of width in kernel regression and GNN training (Sec. 6.2), and the convergence of the GNTK eigenvalues on sequences of graphs (Sec. 6.3). (A kernel-regression sketch follows the table.) |
| Researcher Affiliation | Academia | ¹Dept. of Electrical Engineering and Computer Science; ²Dept. of Mathematics, UCLA; ³Work done in part while visiting the Simons Institute for the Theory of Computing; ⁴MIT CSAIL. Correspondence to: Sanjukta Krishnagopal <sanjukta@berkeley.edu>, Luana Ruiz <ruizl@mit.edu>. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code can be found in this repository. |
| Open Datasets | Yes | MovieLens dataset (Harper & Konstan, 2016)... Cora, CiteSeer and PubMed networks... from the full distribution (Bojchevski & Günnemann, 2017) available in PyTorch Geometric. |
| Dataset Splits | Yes | We fix the training set size to 300 samples, and use 30 samples for both validation and testing. |
| Hardware Specification | Yes | All experiments were run on an NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper mentions using PyTorch Geometric but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | In all experiments, the GNNs have L = 1 layer (Eq. 3 in the paper) with ReLU nonlinearity followed by a perceptron layer. For opinion dynamics, K = 2, and for movie recommendation, K = 5. ... We train the three GNNs by minimizing the MSE loss over 20 epochs and with batch size 32, using ADAM (Kingma & Ba, 2015) with learning rate 1e-3 and weight decay 5e-3. (A training sketch follows the table.) |
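The paper's Sec. 6.2 compares GNN training against kernel regression with the GNTK. The predictor below is the standard kernel ridge regression used with NTK-style kernels; it assumes the GNTK Gram matrices have already been computed (the paper's GNTK recursion is not reproduced here), and the `ridge` value is a numerical-stability assumption, not a figure from the paper.

```python
import numpy as np

def gntk_regression(K_train, K_test_train, y_train, ridge=1e-6):
    """Kernel regression with a precomputed GNTK.

    Prediction: y_hat = K_test,train @ (K_train,train + ridge * I)^{-1} @ y_train.
    K_train:      (n, n) GNTK Gram matrix over training nodes/graphs.
    K_test_train: (m, n) GNTK evaluated between test and training points.
    ridge:        small regularizer for numerical stability (an assumption).
    """
    n = K_train.shape[0]
    # Solve (K + ridge*I) alpha = y instead of forming an explicit inverse.
    alpha = np.linalg.solve(K_train + ridge * np.eye(n), y_train)
    return K_test_train @ alpha
```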
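A minimal sketch of the reported training setup (L = 1 graph convolutional layer of filter order K with ReLU, a perceptron readout, MSE loss, 20 epochs, batch size 32, Adam with lr 1e-3 and weight decay 5e-3). The polynomial-filter parameterization, the hidden width of 64, the scalar readout, and the `loader` are assumptions for illustration; they are not specified by the quoted text.

```python
import torch
import torch.nn as nn

class GraphFilterLayer(nn.Module):
    """Order-K polynomial graph filter, y = sum_{k=0..K} S^k x H_k, then ReLU.

    The exact filter parameterization is an assumption consistent with the
    paper's description, not a verbatim reproduction of its Eq. 3.
    """
    def __init__(self, in_feats, out_feats, K):
        super().__init__()
        self.K = K
        # One weight matrix per power of the graph shift operator S.
        self.weight = nn.Parameter(
            torch.randn(K + 1, in_feats, out_feats) / in_feats ** 0.5
        )

    def forward(self, S, x):
        # S: (n, n) graph shift operator; x: (n, in_feats) node signals.
        out, xk = 0.0, x
        for k in range(self.K + 1):
            out = out + xk @ self.weight[k]
            xk = S @ xk  # advance to the next power of S applied to x
        return torch.relu(out)

class GNN(nn.Module):
    """L = 1 graph convolutional layer followed by a perceptron readout."""
    def __init__(self, in_feats, hidden, K):
        super().__init__()
        self.layer = GraphFilterLayer(in_feats, hidden, K)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, S, x):
        return self.readout(self.layer(S, x)).squeeze(-1)

# Hyperparameters quoted from the paper; hidden width 64 is an assumption.
model = GNN(in_feats=1, hidden=64, K=2)  # K = 2 for opinion dynamics
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-3)
loss_fn = nn.MSELoss()

# Training loop over 20 epochs; `loader` is a hypothetical iterable yielding
# (S, x, y) batches of size 32, so the loop is left commented out.
# for epoch in range(20):
#     for S, x, y in loader:
#         opt.zero_grad()
#         loss_fn(model(S, x), y).backward()
#         opt.step()
```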