Log Neural Controlled Differential Equations: The Lie Brackets Make A Difference

Authors: Benjamin Walker, Andrew D. McLeod, Tiexin Qin, Yichuan Cheng, Haoliang Li, Terry Lyons

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Log-NCDEs are shown to outperform NCDEs, NRDEs, the linear recurrent unit, S5, and MAMBA on a range of multivariate time series datasets with up to 50,000 observations.
Researcher Affiliation | Academia | Mathematical Institute, University of Oxford, UK; Department of Electrical Engineering, City University of Hong Kong.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/Benjamin-Walker/Log-Neural-CDEs
Open Datasets | Yes | "We construct a toy dataset of 100,000 time series with 6 channels and 100 regularly spaced samples each." (A shape-only sketch of such a dataset follows the table.)
Dataset Splits | Yes | Following Morrill et al. (2021), the original train and test cases are combined and resplit into new random train, validation, and test cases using a 70:15:15 split (sketched after the table).
Hardware Specification | Yes | In order to compare the models, 1000 steps of training were run on an NVIDIA RTX 4090, with each model using the hyperparameters obtained from the hyperparameter optimisation.
Software Dependencies | No | The paper mentions JAX's vmap and Adam but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | On the toy dataset, all models use a hidden state of dimension 64 and Adam with a learning rate of 0.0001 (Kingma & Ba, 2017). Full details on the hyperparameter grid search are in Appendix C.4. (A minimal training-step sketch combining vmap and Adam is given after the table.)
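The quoted toy-dataset description pins down only the data's shape. The sketch below builds an array with that shape using random-walk paths purely as a placeholder; the paper's actual generating process (and any label construction) is not reproduced here.

```python
import numpy as np

# Shape-only stand-in: 100,000 series, 6 channels, 100 regularly spaced samples.
# Random-walk paths are a placeholder, NOT the paper's generating process.
rng = np.random.default_rng(0)
paths = np.cumsum(rng.normal(size=(100_000, 100, 6)), axis=1)
times = np.linspace(0.0, 1.0, 100)  # regular sampling grid
assert paths.shape == (100_000, 100, 6)
```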
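The dataset-split row describes a concrete procedure: pool the original train and test cases, then draw a new random 70:15:15 split. A minimal NumPy sketch, with illustrative array names and seed (neither taken from the paper's code), might look like this:

```python
import numpy as np

# Hypothetical resplit implementing the 70:15:15 protocol described above.
def resplit(X_train, y_train, X_test, y_test, seed=0):
    X = np.concatenate([X_train, X_test])
    y = np.concatenate([y_train, y_test])
    idx = np.random.default_rng(seed).permutation(len(X))
    n_train, n_val = int(0.70 * len(X)), int(0.15 * len(X))
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```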
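The experiment-setup and software-dependency rows together mention JAX's vmap and Adam with a learning rate of 0.0001. The sketch below combines the two in a single training step, assuming optax for Adam (the paper does not name a library) and a placeholder linear model standing in for the paper's Log-NCDE:

```python
import jax
import jax.numpy as jnp
import optax  # the paper names Adam but not a library; optax is an assumption

# Stand-in model: a linear readout over a flattened (100, 6) series.
# This is NOT the paper's Log-NCDE, only a placeholder for the training loop.
def model_apply(params, series):
    return series.reshape(-1) @ params["w"] + params["b"]

def loss_fn(params, series, label):
    return (model_apply(params, series) - label) ** 2

def batched_loss(params, xs, ys):
    # jax.vmap vectorises the per-sample loss over the leading batch axis.
    return jnp.mean(jax.vmap(loss_fn, in_axes=(None, 0, 0))(params, xs, ys))

optimiser = optax.adam(1e-4)  # learning rate quoted in the experiment setup
params = {"w": jnp.zeros(600), "b": jnp.zeros(())}
opt_state = optimiser.init(params)

@jax.jit
def train_step(params, opt_state, xs, ys):
    loss, grads = jax.value_and_grad(batched_loss)(params, xs, ys)
    updates, opt_state = optimiser.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss
```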