Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Inductive Bias of Neural Tangent Kernels

Authors: Alberto Bietti, Julien Mairal

NeurIPS 2019 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical experiments. We now study numerically the stability of (exact) kernel mapping representations for convolutional networks with 2 hidden convolutional layers. We consider both a convolutional kernel network (CKN, [11]) with arc-cosine kernels of degree 1 on patches (corresponding to the kernel obtained when only training the last layer and keeping previous layers fixed) and the corresponding NTK. Figure 1 shows the resulting average distances when considering collections of digits and deformations thereof." |
| Researcher Affiliation | Academia | "Alberto Bietti, Inria, EMAIL; Julien Mairal, Inria, EMAIL. Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France" |
| Pseudocode | No | The paper contains mathematical derivations and descriptions of methods, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about making its source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | "Numerical experiments... on digit images and their deformations from the Infinite MNIST dataset [32]." |
| Dataset Splits | No | The paper mentions using the Infinite MNIST dataset but does not provide specific details on how the dataset was split into training, validation, or test sets (e.g., percentages, sample counts, or a description of the splitting methodology). |
| Hardware Specification | No | The paper describes theoretical analyses and numerical experiments but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to conduct these experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies, libraries, or tools with their version numbers that would be necessary for replicating the experiments. |
| Experiment Setup | Yes | "Numerical experiments. We now study numerically the stability of (exact) kernel mapping representations for convolutional networks with 2 hidden convolutional layers. We consider both a convolutional kernel network (CKN, [11]) with arc-cosine kernels of degree 1 on patches (corresponding to the kernel obtained when only training the last layer and keeping previous layers fixed) and the corresponding NTK. See Appendix D for more details on the experimental setup." |
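For context on the kernel named in the experiment rows above: the degree-1 arc-cosine kernel (Cho & Saul) applied to patches has a simple closed form. Below is a minimal NumPy sketch of that closed form as an illustration only; the function name and inputs are hypothetical, and this is not the authors' implementation, which is not released (see the Open Source Code row).

```python
import numpy as np

def arc_cosine_kernel_deg1(x, y):
    """Degree-1 arc-cosine kernel between two patch vectors.

    k(x, y) = ||x|| ||y|| (sin(theta) + (pi - theta) cos(theta)) / pi,
    where theta is the angle between x and y. Hypothetical helper for
    illustration; not the paper's code.
    """
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    # Clip to guard against floating-point values slightly outside [-1, 1].
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
```

A quick sanity check of the closed form: for identical inputs theta = 0, so the kernel reduces to ||x||^2; for orthogonal inputs theta = pi/2, giving ||x|| ||y|| / pi.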