Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS

Authors: Lin Chen, Sheng Xu

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the asymptotics of the Maclaurin coefficients of the Laplace kernel and the NTKs numerically. Fig. 1 plots [z^n]K(z)/n^{-3/2} versus n for different kernels, including the Laplace kernel K_Lap(u) = e^{-√(2(1−u))} and the NTKs N_1, …, N_4 with β = 0, 1. All curves converge to a constant as n → ∞, which indicates that every kernel K(z) considered here satisfies [z^n]K(z) = Θ(n^{-3/2}). The numerical results agree with the theory in the proofs of Theorem 8 and Theorem 1. We then examine the value of [z^n]K(z)/n^{-3/2}: Table 1 reports [z^{100}]K(z)/100^{-3/2} for the Laplace kernel and the NTKs with β = 0, 1; these values are the final values of the curves in Fig. 1.
Researcher Affiliation | Academia | Lin Chen, Simons Institute for the Theory of Computing, University of California, Berkeley (lin.chen@berkeley.edu); Sheng Xu, Department of Statistics and Data Science, Yale University (sheng.xu@yale.edu)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper mentions the 'UCI dataset and other large scale datasets' in the context of prior work by Geifman et al. (2020), but it does not specify any publicly available datasets used for its own numerical results or provide access information for any datasets.
Dataset Splits | No | The paper does not specify any training, validation, or test dataset splits. The numerical results presented are for theoretical verification rather than model training.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run its numerical experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers used for its numerical experiments.
Experiment Setup | No | The paper does not provide specific details about an experimental setup, such as hyperparameters or system-level training settings, as its numerical results are for theoretical verification rather than machine learning model training.
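As an illustrative sketch (not code from the paper), the Θ(n^{-3/2}) decay described under "Research Type" can be checked numerically. The helper below estimates Maclaurin coefficients of a kernel via the Cauchy integral formula, evaluated with an FFT on a circle of radius r < 1 inside the disk of analyticity; the function name and parameter choices (`r=0.95`, `m=4096`) are my own, and the Laplace-kernel form assumes the paper's K_Lap(u) = exp(−√(2(1−u))) with u the inner product on the unit sphere.

```python
import numpy as np

def maclaurin_coeffs(f, n_max, r=0.95, m=4096):
    """Estimate Maclaurin coefficients c_0..c_{n_max} of f via the Cauchy
    integral formula, discretized as an FFT over the circle |z| = r."""
    theta = 2 * np.pi * np.arange(m) / m
    z = r * np.exp(1j * theta)
    c = np.fft.fft(f(z)) / m            # c[n] ~ c_n * r^n (aliasing is negligible here)
    n = np.arange(n_max + 1)
    return (c[: n_max + 1] / r**n).real

# Laplace kernel as a function of u = <x, x'> on the unit sphere,
# assuming the paper's K_Lap(u) = exp(-sqrt(2(1 - u))).
K_lap = lambda z: np.exp(-np.sqrt(2 * (1 - z)))

coeffs = maclaurin_coeffs(K_lap, 100)
ratios = coeffs[1:] * np.arange(1, 101) ** 1.5   # [z^n]K(z) / n^{-3/2}
# The ratio flattens out as n grows, consistent with [z^n]K(z) = Theta(n^{-3/2}).
print(ratios[49], ratios[99])
```

The radius r = 0.95 is a trade-off: the branch point of K_Lap at z = 1 forces r < 1, while values of r much smaller than 1 make c_n · r^n underflow relative to floating-point precision for n near 100.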