Implicit Probabilistic Integrators for ODEs

Authors: Onur Teymur, Han Cheng Lie, Tim Sullivan, Ben Calderhead

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We give an illustrative example highlighting the effect of the use of probabilistic integrators, including our new method, in the setting of parameter inference within an inverse problem."
Researcher Affiliation | Academia | Onur Teymur & Ben Calderhead, Department of Mathematics, Imperial College London; Han Cheng Lie & T. J. Sullivan, Institute of Mathematics, Freie Universität Berlin, & Zuse Institute Berlin
Pseudocode | Yes | "Pseudo-code for this algorithm is given in the supplementary material."
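The paper's pseudocode is in its supplementary material, so the sketch below is only a generic illustration of the kind of probabilistic implicit integrator the paper studies: an implicit (here, implicit Euler rather than the paper's Adams–Moulton family) step solved by fixed-point iteration, followed by a Gaussian perturbation whose standard deviation scales like h^(p + 1/2) for an order-p method. The function name, the perturbation scaling, and all parameters are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def perturbed_implicit_euler(f, y0, h, n_steps, sigma=0.1, p=1,
                             seed=None, max_iters=50, tol=1e-10):
    """Illustrative probabilistic implicit Euler integrator (assumption:
    not the paper's exact method). Each step solves the implicit update
    y_new = y + h * f(y_new) by fixed-point iteration, then adds Gaussian
    noise with std sigma * h**(p + 0.5), a scaling commonly chosen to
    match the local truncation error of an order-p method."""
    rng = np.random.default_rng(seed)
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    traj = [y.copy()]
    for _ in range(n_steps):
        y_new = y + h * f(y)                  # explicit Euler predictor
        for _ in range(max_iters):            # fixed-point corrector
            y_next = y + h * f(y_new)
            converged = np.max(np.abs(y_next - y_new)) < tol
            y_new = y_next
            if converged:
                break
        # probabilistic perturbation of the deterministic step
        y = y_new + sigma * h ** (p + 0.5) * rng.standard_normal(y.shape)
        traj.append(y.copy())
    return np.array(traj)
```

With `sigma=0` this reduces to plain implicit Euler; repeated runs with `sigma > 0` yield an ensemble of trajectories whose spread reflects discretisation uncertainty.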
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | "We first generate synthetic data Y: 20 two-dimensional data-points collected at times t_Y = 1, 2, ..., 20, corrupted by centred Gaussian noise with variance σ² = (0.01) I₂." The paper uses synthetic data and does not provide access information for a publicly available dataset.
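A minimal sketch of the synthetic-data setup quoted above: a two-dimensional trajectory observed at t = 1, ..., 20 with centred Gaussian noise of variance 0.01 per component. The underlying ODE (a Lotka–Volterra system), its parameters, and the initial condition are hypothetical stand-ins, since the paper's exact test system is not quoted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(y, a=1.0, b=0.1, c=1.5, d=0.075):
    # Hypothetical 2-D Lotka-Volterra test system (illustration only;
    # the paper's actual ODE and parameters are not specified here).
    u, v = y
    return np.array([a * u - b * u * v, -c * v + d * u * v])

# Integrate on a fine explicit-Euler grid, observing at t_Y = 1, 2, ..., 20.
h = 1e-3
y = np.array([10.0, 5.0])
t_obs = np.arange(1, 21)
obs = []
for k in range(round(20 / h)):
    y = y + h * f(y)
    t = (k + 1) * h
    if np.any(np.isclose(t, t_obs, atol=h / 2)):
        obs.append(y.copy())

noise_sd = 0.1  # variance (0.1)**2 = 0.01 per component, i.e. sigma^2 I_2 = 0.01 I_2
Y = np.stack(obs) + noise_sd * rng.standard_normal((20, 2))
```

`Y` then plays the role of the observed data in the inverse problem.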
Dataset Splits | No | The paper describes generating synthetic data and running MCMC, but does not specify explicit training, validation, or test dataset splits or percentages.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions algorithms used (e.g., 'Adaptive Metropolis-Hastings', 'pre-conditioned Crank-Nicolson algorithm') but does not specify any software libraries or dependencies with version numbers.
Experiment Setup | Yes | "We first generate synthetic data Y: 20 two-dimensional data-points collected at times t_Y = 1, 2, ..., 20, corrupted by centred Gaussian noise with variance σ² = (0.01) I₂. ... Each represents 1000 parameter samples from simulations run with step-sizes h = 0.005, 0.01, 0.02, 0.05. This is made up of 11000 total samples, with the first 1000 discarded as burn-in and the remainder thinned by a factor of 10. ... h = 0.1 and AM0 = 0.2."
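The sample bookkeeping quoted above is easy to verify: 11000 total draws, the first 1000 discarded as burn-in, and the remaining 10000 thinned by a factor of 10, leaving 1000 retained samples per run. The sketch below uses a random array as a stand-in for an actual MCMC chain.

```python
import numpy as np

# Stand-in for an MCMC chain of 11000 two-dimensional parameter draws
# (the real chain would come from Adaptive Metropolis-Hastings).
chain = np.random.default_rng(1).standard_normal((11000, 2))

burn_in, thin = 1000, 10
kept = chain[burn_in::thin]   # drop burn-in, keep every 10th draw

# (11000 - 1000) / 10 = 1000 retained samples, matching the quoted setup.
```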