On the Generalization and Approximation Capacities of Neural Controlled Differential Equations

Authors: Linus Bleistein, Agathe Guilloux

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theoretical results are validated through a series of experiments. 'Our theoretical results are illustrated by experiments on synthetic data.'
Researcher Affiliation | Academia | Linus Bleistein (Inria Paris, UEVE); Agathe Guilloux (Inria Paris)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | 'Our code is available at this link.'
Open Datasets | No | The paper describes generating synthetic data using the 'Python package stochastic' and 'fBM with Hurst parameter H'. It does not provide a specific link, DOI, repository name, or formal citation for a pre-existing, publicly available dataset. (See the data-generation sketch below the table.)
Dataset Splits | No | The paper mentions training and test sample sizes but does not specify a validation split or its size: 'The size of the training sample is set to n = 100.' and 'the expected generalization error is computed on 50 test samples.'
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper mentions the 'Python package stochastic' and 'PyTorch's default initialization' but does not provide version numbers for any software dependency.
Experiment Setup | Yes | 'The model is initialized with PyTorch's default initialization. In the first and second figures (starting from the left), the model is trained for 2000 iterations with Adam. We use the default values for α and β and a learning rate of 5 × 10⁻³. The size of the training sample is set to n = 100.' and 'We train a shallow NCDE classifier with p = 3 on n = 100 time series sampled at 100 equidistant time points in [0, 1] for 100 iterations with Binary Cross Entropy (BCE) loss. We use Adam with default settings and a learning rate of 5 × 10⁻².' (See the training sketch below the table.)
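
The 'Open Datasets' row says the synthetic driving paths are sampled from fractional Brownian motion via the Python package stochastic. The minimal sketch below shows one way this could look; the Hurst value 0.5, the number of paths (100), and the 100-point grid on [0, 1] are illustrative assumptions inferred from the quotes above, not confirmed details of the authors' pipeline.

```python
# Sketch: sampling fBM driving paths with the `stochastic` package.
# hurst=0.5 is an assumed value; the excerpt only mentions
# "fBM with Hurst parameter H".
import numpy as np
from stochastic.processes.continuous import FractionalBrownianMotion

n_series, n_points = 100, 100          # 100 paths, 100 equidistant points
fbm = FractionalBrownianMotion(hurst=0.5, t=1)

# sample(k) returns k + 1 values (including the start at 0), so request
# n_points - 1 increments to get exactly n_points values on [0, 1].
paths = np.stack([fbm.sample(n_points - 1) for _ in range(n_series)])
times = fbm.times(n_points - 1)        # equidistant grid on [0, 1]
print(paths.shape, times.shape)        # (100, 100) (100,)
```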
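
The 'Experiment Setup' row fixes most hyperparameters of the classification experiment (p = 3, n = 100 series on 100 equidistant points, 100 Adam iterations at learning rate 5 × 10⁻², BCE loss, PyTorch default initialization). The sketch below assembles these into one plausible shallow NCDE classifier; the Euler discretization, the input dimension d = 2, the hidden width 32, the class name ShallowNCDE, and the placeholder data are assumptions not stated in the excerpt, and BCE is applied in its logit form (BCEWithLogitsLoss) for numerical stability.

```python
# Sketch of the quoted setup: shallow NCDE classifier, latent dimension p = 3,
# 100 Adam iterations, lr = 5e-2, BCE loss. Architecture details are assumed.
import torch
import torch.nn as nn

d, p = 2, 3  # d (input-path channels) is assumed; p = 3 comes from the quote

class ShallowNCDE(nn.Module):
    def __init__(self, d, p, width=32):
        super().__init__()
        self.embed = nn.Linear(d, p)        # z_0 from the first observation
        self.field = nn.Sequential(         # shallow vector field f: R^p -> R^{p x d}
            nn.Linear(p, width), nn.Tanh(), nn.Linear(width, p * d)
        )
        self.readout = nn.Linear(p, 1)

    def forward(self, x):                   # x: (batch, n_points, d)
        z = self.embed(x[:, 0])
        dx = x[:, 1:] - x[:, :-1]           # path increments on the time grid
        for k in range(dx.shape[1]):        # Euler scheme: z_{k+1} = z_k + f(z_k) dX_k
            f = self.field(z).view(-1, p, d)
            z = z + torch.bmm(f, dx[:, k].unsqueeze(-1)).squeeze(-1)
        return self.readout(z).squeeze(-1)  # classification logit

# Placeholder data standing in for the fBM paths and labels.
x = torch.randn(100, 100, d).cumsum(dim=1)
y = torch.randint(0, 2, (100,)).float()

model = ShallowNCDE(d, p)                   # PyTorch default initialization
opt = torch.optim.Adam(model.parameters(), lr=5e-2)
loss_fn = nn.BCEWithLogitsLoss()            # BCE applied to logits
for _ in range(100):                        # 100 training iterations
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```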