Neural Laplace: Learning diverse classes of differential equations in the Laplace domain

Authors: Samuel I Holt, Zhaozhi Qian, Mihaela van der Schaar

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs, including the ones with complex history dependency and abrupt changes. We evaluate Neural Laplace on a broad range of dynamical systems arising from engineering and natural sciences. These systems are governed by different classes of DEs. We show that Neural Laplace is able to model and predict these systems better than the ODE based methods.
Researcher Affiliation | Academia | Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK.
Pseudocode | Yes | Algorithm 1: Neural Laplace Training Procedure (a hedged sketch of such a training step follows the table).
Open Source Code | Yes | We have released a PyTorch (Paszke et al., 2017) implementation of Neural Laplace, including GPU implementations of several ILT algorithms. The code for this is at https://github.com/samholt/NeuralLaplace.
Open Datasets | No | To sample the delay DE systems, we use the delay differential equation solver of Zulko (2019) to sample the Spiral DDE, Lotka-Volterra DDE, and Mackey–Glass DDE data sets. ... For sampling the Integro DE, we use the analytical general solution... We similarly sampled the ODE with piecewise forcing function using its analytical general solution... (A sampling sketch using this solver follows the table.)
Dataset Splits | Yes | We divide the trajectories into a train-validation-test split of 80 : 10 : 10, for training, hyperparameter tuning, and evaluation respectively.
Hardware Specification | Yes | We trained and took these readings on an Intel Xeon CPU @ 2.30GHz, 64GB RAM, with an Nvidia Tesla V100 GPU 16GB.
Software Dependencies | No | We have released a PyTorch (Paszke et al., 2017) implementation of Neural Laplace, including GPU implementations of several ILT algorithms.
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2017) with a learning rate of 1e-3 and a batch size of 128. When training, we use early stopping on the validation data set with a patience of 100, training for 1,000 epochs unless otherwise stated. (A configuration sketch with these settings follows the table.)
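
The Pseudocode row points to Algorithm 1, the Neural Laplace training procedure: encode the observed part of each trajectory, predict the Laplace representation F(s) at a set of complex query points, numerically invert the transform at the target times, and minimise the reconstruction error. The sketch below is a minimal illustration of that loop, not the authors' implementation: the GRU encoder, the LaplaceRepresentation MLP, the fourier_ilt routine (a plain Fourier-series inverse Laplace transform), and all shapes and constants are assumptions; only the Adam learning rate of 1e-3 and the batch size of 128 are taken from the Experiment Setup row.

```python
# Hypothetical sketch of a Neural-Laplace-style training step (not the authors' code).
# Pipeline: encode observed history -> predict F(s) at complex query points ->
# numerically invert the Laplace transform -> MSE against future observations.
import torch
import torch.nn as nn

class LaplaceRepresentation(nn.Module):
    """Maps a latent trajectory code p and complex query points s to F(s)."""
    def __init__(self, latent_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),  # outputs (Re F(s), Im F(s))
        )

    def forward(self, p, s):
        # p: (batch, latent_dim); s: (batch, n_query), complex
        feats = torch.stack([s.real, s.imag], dim=-1)         # (batch, n_query, 2)
        p_rep = p.unsqueeze(1).expand(-1, s.shape[1], -1)     # (batch, n_query, latent_dim)
        out = self.net(torch.cat([p_rep, feats], dim=-1))
        return torch.complex(out[..., 0], out[..., 1])        # (batch, n_query)

def fourier_ilt(F, t, sigma=0.05, n_terms=32):
    """Fourier-series inverse Laplace transform of F, evaluated at times t (batch, n_t)."""
    T = 2.0 * t.max()                                         # period scale, must exceed t_max
    k = torch.arange(n_terms + 1, device=t.device, dtype=t.dtype)
    s = (sigma + 1j * k * torch.pi / T).unsqueeze(0).expand(t.shape[0], -1)
    Fs = F(s)                                                 # (batch, n_terms + 1)
    # f(t) ~ e^{sigma t} / T * [ Re F(sigma)/2 + sum_k Re( F(s_k) e^{i k pi t / T} ) ]
    phase = torch.exp(1j * torch.pi * k.view(1, 1, -1) * t.unsqueeze(-1) / T)
    terms = (Fs.unsqueeze(1) * phase).real                    # (batch, n_t, n_terms + 1)
    series = 0.5 * terms[..., 0] + terms[..., 1:].sum(dim=-1)
    return torch.exp(sigma * t) / T * series

# One training step on a synthetic batch (batch size 128 and lr 1e-3 as reported).
latent_dim, obs_dim = 2, 1
encoder = nn.GRU(input_size=obs_dim, hidden_size=latent_dim, batch_first=True)
laplace_rep = LaplaceRepresentation(latent_dim)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(laplace_rep.parameters()), lr=1e-3)

x_obs = torch.randn(128, 50, obs_dim)                         # observed half of each trajectory
t_future = torch.linspace(0.1, 5.0, 50).repeat(128, 1)        # times to extrapolate to
x_future = torch.randn(128, 50)                               # placeholder ground truth

_, h = encoder(x_obs)                                         # final hidden state as latent p
p = h.squeeze(0)                                              # (128, latent_dim)
x_hat = fourier_ilt(lambda s: laplace_rep(p, s), t_future)    # reconstructed trajectories
loss = nn.functional.mse_loss(x_hat, x_future)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The paper additionally applies a stereographic (sphere) projection of the Laplace domain and offers several ILT algorithms; the sketch omits both and uses a single fixed Fourier-series inversion for brevity.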
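
The Open Datasets row says the delay DE trajectories were generated with the DDE solver of Zulko (2019), which we take to be the ddeint Python package. Below is a minimal sampling sketch in that spirit; the delayed Lotka-Volterra right-hand side, the delay of 0.1, the constant history, and the time grid are illustrative assumptions rather than the paper's exact systems or constants.

```python
# Hypothetical sketch of sampling one delayed Lotka-Volterra trajectory with ddeint
# (the DDE solver of Zulko, 2019). Equations and constants below are illustrative only.
import numpy as np
from ddeint import ddeint

TAU = 0.1  # assumed delay

def lotka_volterra_dde(Y, t):
    """Right-hand side of the DDE; Y(t - TAU) returns the delayed state."""
    x, y = Y(t)
    xd, yd = Y(t - TAU)
    return np.array([0.5 * x * (1.0 - yd),    # prey growth suppressed by delayed predators
                     -0.5 * y * (1.0 - xd)])  # predator decay offset by delayed prey

def history(t):
    # Constant history for t <= 0 (the DDE's initial condition).
    return np.array([1.0, 2.0])

tt = np.linspace(0.0, 30.0, 200)                      # observation grid
trajectory = ddeint(lotka_volterra_dde, history, tt)  # shape (200, 2)
```

Sampling many trajectories from randomised initial histories, then splitting each into an observed half and an extrapolation half, would produce data sets of the kind described in the quote.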
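
The Dataset Splits and Experiment Setup rows fix the concrete training configuration: an 80:10:10 train-validation-test split, Adam with a learning rate of 1e-3, batch size 128, early stopping on the validation set with a patience of 100, and up to 1,000 epochs. The sketch below wires those numbers together around a placeholder model and synthetic data; the model, tensors, and checkpoint file name are hypothetical, and the early-stopping loop is a generic implementation rather than the authors' code.

```python
# Hypothetical training-configuration sketch using the reported hyperparameters:
# 80:10:10 split, Adam(lr=1e-3), batch size 128, early stopping (patience 100), 1,000 epochs.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

trajectories = torch.randn(1000, 100)   # placeholder: 1,000 trajectories, 100 time steps each
targets = torch.randn(1000, 100)        # placeholder extrapolation targets
dataset = TensorDataset(trajectories, targets)

n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n - n_train - n_val])

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128)

model = nn.Sequential(nn.Linear(100, 64), nn.Tanh(), nn.Linear(64, 100))  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 100, 0
for epoch in range(1000):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader) / len(val_loader)

    # Early stopping on the validation loss with a patience of 100 epochs.
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```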