Black-Box Variational Inference for Stochastic Differential Equations

Authors: Tom Ryder, Andrew Golightly, A. Stephen McGough, Dennis Prangle

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the method on a Lotka-Volterra system and an epidemic model, producing accurate parameter estimates in a few hours. We implement our method for two examples: (1) analysing synthetic data from a Lotka-Volterra SDE; (2) analysing real data from an SDE model of a susceptible-infectious-removed (SIR) epidemic. Our experiments include challenging regimes such as: (A) low-variance observations; (B) conditioned diffusions with non-linear dynamics; (C) unobserved time series; (D) widely spaced observation times; (E) data which is highly unlikely under the unconditioned model. In all our experiments below similar tuning choices worked well. We use batch size n = 50 in (22). Our RNN cell has four hidden layers each with 20 hidden units and rectified-linear activation. We implement our algorithms in TensorFlow using the Adam optimiser (Kingma & Ba, 2015) and report results using an 8-core CPU.
Researcher Affiliation | Academia | School of Mathematics, Statistics and Physics, Newcastle University, Newcastle, UK; School of Computing, Newcastle University, Newcastle, UK.
Pseudocode | Yes | Algorithm 1: Black-box variational inference for SDEs.
Open Source Code | Yes | The code is available at https://github.com/Tom-Ryder/VIforSDEs.
Open Datasets | Yes | Our data is taken from an outbreak of influenza at a boys' boarding school in 1978 (Jackson et al., 2013).
Dataset Splits | No | The paper does not explicitly provide the training, validation, and test splits needed to reproduce the experiments. It discusses batch size and generating synthetic data, but not how the data was partitioned.
Hardware Specification | Yes | We implement our algorithms in TensorFlow using the Adam optimiser (Kingma & Ba, 2015) and report results using an 8-core CPU.
Software Dependencies | No | The paper mentions using TensorFlow and the Adam optimiser, but does not specify version numbers for these software components.
Experiment Setup | Yes | We use batch size n = 50 in (22). Our RNN cell has four hidden layers each with 20 hidden units and rectified-linear activation. We implement our algorithms in TensorFlow using the Adam optimiser (Kingma & Ba, 2015).
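The paper's first example analyses synthetic data simulated from a Lotka-Volterra SDE, which in practice means discretising the diffusion with the Euler-Maruyama scheme. The sketch below is a minimal NumPy illustration of such a simulator, not the paper's TensorFlow code; the drift/diffusion terms follow the standard predator-prey diffusion approximation, and the parameter values and initial state are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def euler_maruyama_lv(x0, theta, dt, n_steps, rng):
    """Simulate one Lotka-Volterra SDE path via the Euler-Maruyama scheme.

    theta = (th1, th2, th3): prey birth, predation, predator death rates
    (illustrative parameterisation, not taken from the paper).
    """
    th1, th2, th3 = theta
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        u, v = x  # prey and predator populations
        drift = np.array([th1 * u - th2 * u * v,
                          th2 * u * v - th3 * v])
        # Diffusion matrix from the chemical Langevin approximation
        cov = np.array([[th1 * u + th2 * u * v, -th2 * u * v],
                        [-th2 * u * v, th2 * u * v + th3 * v]])
        noise = rng.multivariate_normal(np.zeros(2), cov * dt)
        x = np.maximum(x + drift * dt + noise, 0.0)  # keep populations non-negative
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(0)
path = euler_maruyama_lv(x0=(100.0, 100.0), theta=(0.5, 0.0025, 0.3),
                         dt=0.1, n_steps=200, rng=rng)
print(path.shape)  # (201, 2): initial state plus 200 Euler-Maruyama steps
```

Simulated paths like this stand in for the synthetic observations; the variational method then conditions such a discretised diffusion on the (possibly sparse or partial) data.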
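Algorithm 1 in the paper is a black-box variational scheme: sample from the variational approximation via the reparameterisation trick, form a Monte Carlo estimate of the ELBO gradient, and update with a stochastic optimiser. The toy sketch below shows only that generic machinery on a one-dimensional Gaussian target with hand-derived pathwise gradients; the paper's actual algorithm targets SDE paths with an RNN-parameterised posterior and TensorFlow autodiff, none of which appears here. The learning rate, step count, and target are assumptions for illustration; only the batch size n = 50 mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, log_sigma = -2.0, 1.0   # variational parameters of q = N(mu, sigma^2)
lr, n_batch = 0.05, 50      # batch size 50 mirrors the paper's n = 50

for step in range(2000):
    eps = rng.standard_normal(n_batch)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps  # reparameterisation trick: z ~ q(z)
    # Toy target log p(z) = -z^2/2 (standard normal), so the ELBO optimum
    # is mu = 0, sigma = 1. Pathwise Monte Carlo gradients of the ELBO:
    grad_mu = np.mean(-z)                    # d log p / d mu
    grad_ls = np.mean(-z * sigma * eps) + 1  # + d entropy / d log_sigma
    mu += lr * grad_mu
    log_sigma += lr * grad_ls

print(mu, np.exp(log_sigma))  # converges near 0 and 1
```

In the paper this gradient estimation is "black-box" because autodiff replaces the hand-derived gradients above, so the same loop applies to the much higher-dimensional SDE-path posterior.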