Differentiable Likelihoods for Fast Inversion of 'Likelihood-Free' Dynamical Systems

Authors: Hans Kersting, Nicholas Krämer, Martin Schiegg, Christian Daniel, Michael Tiemann, Philipp Hennig

ICML 2020

Reproducibility variables, results, and supporting LLM responses:
Research Type: Experimental. "We demonstrate that these methods outperform standard likelihood-free approaches on three benchmark-systems."
Researcher Affiliation: Collaboration. "¹University of Tübingen, Tübingen, Germany, ²Max Planck Institute for Intelligent Systems, Tübingen, Germany, ³Bosch Center for Artificial Intelligence, Renningen, Germany."
Pseudocode: Yes. "Algorithm 1 Gradient-based sampling/optimization" (an illustrative gradient-update sketch follows this list).
Open Source Code: No. The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets: No. "All datasets are, as in eq. (3), generated by adding Gaussian noise to the solution x_θ* for some true parameter θ*. ... To generate data by eq. (3), we added Gaussian noise with variance σ² = 0.01 to the corresponding solution at time points [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]." (A data-generation sketch follows this list.)
Dataset Splits: No. The paper does not explicitly describe training, validation, or test dataset splits.
Hardware Specification: No. The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies: No. The paper does not specify any software dependencies with version numbers.
Experiment Setup: Yes. "The optimizers and samplers were initialized at θ0 = [0.8, 0.2, 0.05, 1.1], and the forward solutions for all likelihood evaluations were computed with step size h = 0.05. In order to turn this θ0 into a useful initialization for the Markov chains, we accepted the first 45 states generated by PHMC and PLMC... For all optimizers, we picked the best step size and, for all samplers, the best proposal width within the interval [10^-16, 10^0], which is wide enough to contain all plausible values." (A step-size tuning sketch follows this list.)
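
The Pseudocode entry refers to the paper's Algorithm 1 (gradient-based sampling/optimization), which exploits the differentiable likelihood to drive standard gradient-based optimizers and samplers. The sketch below only illustrates what such updates look like; it is not the paper's algorithm, and grad_log_post is a hypothetical callable standing in for the log-posterior gradient that the paper obtains from its probabilistic ODE solver.

```python
import numpy as np

def gradient_ascent_step(theta, grad_log_post, step_size):
    """One gradient-ascent step on the log-posterior (optimization mode)."""
    return theta + step_size * grad_log_post(theta)

def langevin_proposal(theta, grad_log_post, step_size, rng):
    """One Langevin proposal (sampling mode); the Metropolis correction used
    by MALA-type samplers is omitted for brevity."""
    drift = 0.5 * step_size * grad_log_post(theta)
    noise = np.sqrt(step_size) * rng.normal(size=theta.shape)
    return theta + drift + noise
```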
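
The Open Datasets entry quotes the data-generation protocol of eq. (3). Here is a minimal sketch of that protocol under stated assumptions: the right-hand side rhs, the true parameter theta_true, and the initial value x0 are hypothetical placeholders (the paper's benchmark systems are not reproduced), while the noise variance σ² = 0.01 and the observation time points are taken from the quote.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, theta):
    # Hypothetical right-hand side; stands in for one of the benchmark ODEs.
    return -theta[0] * x

t_obs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])  # times from the quote
sigma2 = 0.01                                                     # noise variance from the quote
theta_true = np.array([1.0])                                      # hypothetical "true" parameter
x0 = np.array([1.0])                                              # hypothetical initial value

# Solve the ODE and read off the trajectory at the observation times.
sol = solve_ivp(rhs, (0.0, 4.5), x0, args=(theta_true,), t_eval=t_obs)

# Data as in eq. (3): noisy observations y_i = x_theta*(t_i) + Gaussian noise.
rng = np.random.default_rng(0)
y = sol.y.T + rng.normal(scale=np.sqrt(sigma2), size=sol.y.T.shape)
```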
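
The Experiment Setup entry mentions picking the best step size (optimizers) and proposal width (samplers) within [10^-16, 10^0]. A minimal sketch of such a search over a log-spaced grid is below; run_method and evaluate are hypothetical callables, since the quoted setup does not name the selection criterion, while theta0 and the interval endpoints come from the quote.

```python
import numpy as np

theta0 = np.array([0.8, 0.2, 0.05, 1.1])  # initialization quoted under "Experiment Setup"

# Log-spaced candidate step sizes / proposal widths spanning [1e-16, 1e0].
candidates = np.logspace(-16, 0, num=17)

def pick_best(run_method, evaluate):
    """Return the candidate with the best score. `run_method(theta0, s)` and
    `evaluate(result)` are hypothetical stand-ins for running an optimizer or
    sampler with tuning value s and scoring its output."""
    scores = [evaluate(run_method(theta0, s)) for s in candidates]
    return candidates[int(np.argmax(scores))]
```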