Neural signature kernels as infinite-width-depth-limits of controlled ResNets

Authors: Nicola Muca Cirone, Maud Lemercier, Cristopher Salvi

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we first illustrate theoretical results established in Section 4 and then outline numerical considerations to scale the computation of signature kernels. To this aim, we consider a homogeneous ResNet Φ_φ^{M,N} with activation function φ = ReLU, and (σ_a, σ_A, σ_b) = (0.5, 1., 1.2). For R = 250 realizations of the weights and biases, we run the model on a 2-dimensional path x : t ↦ (sin(15t), cos(30t) + 3e^t) observed at 100 regularly spaced time points in [0, 1]. We then verify that, as N increases, Φ_φ^{M,N}(x) converges to a Gaussian random variable with mean zero and variance K_φ(x, x).
Researcher Affiliation | Academia | (1) Department of Mathematics, Imperial College London, London, United Kingdom; (2) Department of Mathematics, University of Oxford, Oxford, United Kingdom.
Pseudocode | Yes | Algorithm 1: S_1^{M,N} as a Netsor program (in Appendix B.1.1) and Algorithm 2: S_1^{M,N} as a Netsor program (in Appendix C.1.1).
Open Source Code | Yes | All the experiments presented in this paper are reproducible following the code at https://github.com/MucaCirone/NeuralSignatureKernels
Open Datasets | No | The paper generates its own data for numerical validation, such as "a 2-dimensional path x : t ↦ (sin(15t), cos(30t) + 3e^t)" and "two sample paths from a zero-mean GP with RBF kernel", without providing a specific link, DOI, or formal citation to a pre-existing public dataset.
Dataset Splits | No | The paper describes how it runs models on generated paths and estimates errors, but it does not specify any dataset splits like "training", "validation", or "test" percentages or counts.
Hardware Specification | No | The paper mentions "GPU computations" and "maximum number of threads in a GPU block" but does not specify any exact GPU models (e.g., NVIDIA A100, RTX 2080 Ti), CPU models, or detailed cloud/cluster resource specifications.
Software Dependencies | No | The paper mentions "dedicated python packages such as torchcde" but does not provide specific version numbers for these software components or any other libraries used for replication.
Experiment Setup | Yes | To this aim, we consider a homogeneous ResNet Φ_φ^{M,N} with activation function φ = ReLU, and (σ_a, σ_A, σ_b) = (0.5, 1., 1.2).