Neural Controlled Differential Equations for Irregular Time Series

Authors: Patrick Kidger, James Morrill, James Foster, Terry Lyons

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that our model achieves state-of-the-art performance against similar (ODE or RNN based) models in empirical studies on a range of datasets. Finally we provide theoretical results demonstrating universal approximation, and that our model subsumes alternative ODE models.
Researcher Affiliation | Academia | Patrick Kidger, James Morrill, James Foster, Terry Lyons; Mathematical Institute, University of Oxford; The Alan Turing Institute, British Library. {kidger, morrill, foster, tlyons}@maths.ox.ac.uk
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/patrick-kidger/NeuralCDE. We have also released a library, torchcde, at https://github.com/patrick-kidger/torchcde.
Open Datasets | Yes | We consider the Character Trajectories dataset from the UEA time series classification archive [31]. We use data from the PhysioNet 2019 challenge on sepsis prediction [32, 33]. We used the Speech Commands dataset [34].
Dataset Splits | Yes | Appendix D.2, Character Trajectories: We use a 70%/10%/20% train/validation/test split. Appendix D.3, PhysioNet sepsis prediction: We use a 70%/10%/20% train/validation/test split for this data, as with Character Trajectories. Appendix D.4, Speech Commands: We use the recommended 80%/10%/10% train/validation/test splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments.
Software Dependencies | No | In our experiments, we were able to straightforwardly use the already-existing torchdiffeq package [24] without modification. The implementation via torchdiffeq is in Python. The paper names the torchdiffeq package and Python, but does not specify version numbers for either.
Experiment Setup | Yes | For every problem, the hyperparameters were chosen by performing a grid search to optimise the performance of the baseline ODE-RNN model. Equivalent hyperparameters were then used for every other model, adjusted slightly so that every model has a comparable number of parameters. Precise experimental details may be found in Appendix D, regarding normalisation, architectures, activation functions, optimisation, hyperparameters, regularisation, and so on.
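The 70%/10%/20% (and 80%/10%/10%) splits reported in the Dataset Splits row can be reproduced generically by shuffling example indices and cutting them at the stated proportions. The sketch below is illustrative only: the function name `split_indices`, the fixed seed, and the rounding behaviour are our own assumptions, not taken from the authors' repository.

```python
import random

def split_indices(n, fracs=(0.7, 0.1, 0.2), seed=0):
    """Shuffle the indices 0..n-1 with a fixed seed, then cut them into
    train/validation/test pieces at the given proportions.

    NOTE: illustrative sketch only; the paper's actual splitting code
    may round or seed differently.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    # everything left over goes to the test set
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For the Speech Commands row, `split_indices(n, fracs=(0.8, 0.1, 0.1))` gives the 80%/10%/10% variant.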
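The model behind the Software Dependencies row solves a controlled differential equation, z(t) = z(0) + ∫ f(z(s)) dX(s), where X is an interpolation of the irregularly observed time series; the authors' released code does this with torchdiffeq/torchcde. As a dependency-free illustration only, the sketch below integrates such a CDE with explicit Euler steps over a linearly interpolated control path. The vector field built by `make_f` is a toy stand-in with fixed weights, not the paper's learned network, and explicit Euler is our assumption, not the solver the authors used.

```python
import math

def neural_cde_euler(xs, ts, z0, f, steps_per_interval=10):
    """Integrate z(t) = z0 + \\int f(z(s)) dX(s) by explicit Euler,
    where X is the linear interpolation of observations xs (a list of
    channel tuples) at times ts.  f(z) returns a (hidden x channels)
    matrix; with linear interpolation dX/ds is piecewise constant, so
    each Euler step reads z <- z + f(z) @ (dX/ds) * ds.

    NOTE: illustrative sketch; the paper's code uses adaptive solvers
    from torchdiffeq and learns f by backpropagation.
    """
    z = list(z0)
    for t0, t1, x0, x1 in zip(ts, ts[1:], xs, xs[1:]):
        # derivative of the control path on this interval, per channel
        dxds = [(a1 - a0) / (t1 - t0) for a0, a1 in zip(x0, x1)]
        ds = (t1 - t0) / steps_per_interval
        for _ in range(steps_per_interval):
            F = f(z)  # hidden x channels matrix
            z = [zi + ds * sum(Fij * dj for Fij, dj in zip(row, dxds))
                 for zi, row in zip(z, F)]
    return z

def make_f(hidden, channels, scale=0.1):
    """Toy vector field f(z)_{ij} = tanh(w_{ij} * z_i) with fixed
    weights -- a stand-in for the neural network of the paper."""
    w = [[scale * (i + j + 1) for j in range(channels)] for i in range(hidden)]
    def f(z):
        return [[math.tanh(wij * zi) for wij in row] for zi, row in zip(z, w)]
    return f
```

A useful sanity check of the formulation: if the observed path X is constant, dX = 0 and the hidden state never moves, which is exactly what the integral says should happen.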
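The hyperparameter selection described in the Experiment Setup row, a grid search optimising the baseline ODE-RNN, amounts to exhaustively evaluating every combination in a parameter grid and keeping the best. The sketch below is generic: the names `grid_search` and `evaluate` are our own, and in the paper's setting `evaluate` would train the baseline model and return its validation performance.

```python
import itertools

def grid_search(grid, evaluate):
    """Try every combination of hyperparameter values in `grid`
    (a dict mapping name -> list of candidate values) and return the
    combination with the highest `evaluate` score.

    NOTE: illustrative sketch of the selection scheme the paper
    describes; `evaluate` stands in for training + validation.
    """
    best_score, best_params = float("-inf"), None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Per the setup row, the winning hyperparameters would then be reused for every other model, lightly adjusted to keep parameter counts comparable.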