Fast Graph Neural Tangent Kernel via Kronecker Sketching
Authors: Shunhua Jiang, Yunze Man, Zhao Song, Zheng Yu, Danyang Zhuo
AAAI 2022, pp. 7033-7041 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments to validate that the error introduced by matrix sketching is strictly bounded. Following Lemma 3, we validate the error difference between matrix multiplication with and without the sketching method. Specifically, we randomly generate n × n matrices A, G and H. Matrix multiplication without sketching is calculated by M = G^T A H. For the sketching method, we randomly generate two AMS matrices R and S of size γn × n, where γ is the sketching ratio, and matrix multiplication with sketching is calculated by M_sketch = G^T R^T R A S^T S H. The experimental error matrix is calculated by \|M − M_sketch\|, and the theoretical error matrix is calculated by the RHS of Lemma 3. We divide both errors by the original matrix M to show the relative error, and report the final mean error by averaging over all entries of the error matrices. Fig. 3 shows the result. We set n = 500 and run experiments under different sketching rates from 0.1 to 0.9. We run each sketching rate 100 times and calculate the mean error. We also show the matrix multiplication time with/without sketching. Experiments show that our sketching error is always lower than the theoretical bound. When the sketching rate is high, the error decreases and the running time increases because the dimension of the sketched matrix is larger. This experiment validates our Lemma 3, showing that our matrix sketching method has a strictly bounded error. |
| Researcher Affiliation | Collaboration | Columbia University, Carnegie Mellon University, Adobe Research, Princeton University, Duke University |
| Pseudocode | No | The paper does not contain a clearly labeled "Pseudocode" or "Algorithm" block. The methods are described textually and mathematically. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper states, "we randomly generate n × n matrices A, G and H" for experimental validation. This is a synthetic data generation process: no publicly available dataset is named or linked, nor is access to the generated data provided. |
| Dataset Splits | No | The paper does not specify any training/test/validation dataset splits. The experiments involve randomly generated matrices to validate error bounds. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) that would be needed to reproduce the experiment. |
| Experiment Setup | Yes | We set n = 500 and run experiments under different sketching rates from 0.1 to 0.9. We run each sketching rate 100 times and calculate the mean error. |
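The validation experiment quoted above can be reproduced in outline with NumPy. This is a minimal sketch, not the authors' code: it assumes the AMS matrices are i.i.d. random-sign matrices scaled by 1/√m (so that E[RᵀR] = I), and it uses a scaled-down n = 200 with a single sketching rate γ = 0.5 rather than the paper's n = 500 swept over γ ∈ [0.1, 0.9] with 100 trials each. The relative error is summarized here with the Frobenius norm; the paper reports an entrywise mean instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 200, 0.5            # paper uses n = 500 and gamma from 0.1 to 0.9
m = int(gamma * n)             # sketch dimension m = gamma * n

# Random n x n input matrices, as in the paper's validation experiment
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
H = rng.standard_normal((n, n))

def ams_sketch(m, n, rng):
    """AMS-style sketch (assumed form): i.i.d. +-1 entries scaled by
    1/sqrt(m), which makes E[R.T @ R] = I and each column unit-norm."""
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

R = ams_sketch(m, n, rng)
S = ams_sketch(m, n, rng)

M = G.T @ A @ H                                   # exact product
M_sketch = (G.T @ R.T) @ (R @ A @ S.T) @ (S @ H)  # sketched product

# Relative error of the sketched product (Frobenius norm)
rel_err = np.linalg.norm(M - M_sketch) / np.linalg.norm(M)
print(f"relative Frobenius error at gamma={gamma}: {rel_err:.3f}")
```

To mirror the paper's full experiment, one would loop this over sketching rates 0.1 to 0.9, average the entrywise error over 100 trials per rate, and compare against the theoretical bound from Lemma 3 (whose exact form is given in the paper, not reproduced here).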