Truncated proposals for scalable and hassle-free simulation-based inference

Authors: Michael Deistler, Pedro J. Gonçalves, Jakob H. Macke

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that TSNPE performs on par with previous methods on established benchmark tasks. We then apply TSNPE to two challenging problems from neuroscience and show that TSNPE can successfully obtain the posterior distributions, whereas previous methods fail. Overall, our results demonstrate that TSNPE is an efficient, accurate, and robust inference method that can scale to challenging scientific models."
Researcher Affiliation | Academia | Michael Deistler (University of Tübingen, michael.deistler@uni-tuebingen.de); Pedro J. Gonçalves (University of Tübingen, pedro.goncalves@uni-tuebingen.de); Jakob H. Macke (University of Tübingen and Max Planck Institute for Intelligent Systems, jakob.macke@uni-tuebingen.de)
Pseudocode | Yes | Algorithm 1: TSNPE. Inputs: prior p(θ), observation x_o, simulations per round N, number of rounds R, ε defining the (1 − ε) highest-probability region (HPR_ε). Outputs: approximate posterior q_φ. Initialize: proposal p̃(θ) = p(θ), dataset X = {}.
for r ∈ [1, ..., R]:
    for i ∈ [1, ..., N]:
        θ_i ∼ p̃(θ); simulate x_i ∼ p(x | θ_i); add (θ_i, x_i) to X
    φ = argmin_φ (1/|X|) Σ_{(θ_i, x_i) ∈ X} −log q_φ(θ_i | x_i)
    Compute expected coverage(p̃(θ), q_φ)    // see Alg. 2
    p̃(θ) ∝ p(θ) · 1_{θ ∈ HPR_ε}             // see Alg. 3
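The core of the algorithm above is the truncated proposal: sample from the prior, but reject any draw falling outside the highest-probability region of the current posterior estimate. The following is a minimal, self-contained sketch of that loop with a toy 1-D problem; it is not the authors' implementation (which uses a trained neural density estimator via the sbi toolkit), and all function names here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_sample(n):
    # toy prior: Uniform(-5, 5)
    return rng.uniform(-5.0, 5.0, size=n)

def simulate(theta):
    # toy simulator: x = theta + Gaussian noise
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

def posterior_logpdf(theta, x_o):
    # stand-in for a trained q_phi(theta | x_o): unnormalized Gaussian around x_o
    return -0.5 * ((theta - x_o) / 0.5) ** 2

def hpr_log_cutoff(x_o, eps=1e-4, n=10_000):
    # estimate the log-density cutoff of the (1 - eps) highest-probability
    # region from samples of the current posterior approximation
    samples = x_o + 0.5 * rng.normal(size=n)
    return np.quantile(posterior_logpdf(samples, x_o), eps)

def truncated_proposal(x_o, n):
    # rejection sampling: draw from the prior, keep only draws whose
    # approximate posterior density lies inside the HPR
    cutoff = hpr_log_cutoff(x_o)
    accepted = []
    while len(accepted) < n:
        cand = prior_sample(n)
        keep = posterior_logpdf(cand, x_o) >= cutoff
        accepted.extend(cand[keep].tolist())
    return np.array(accepted[:n])

x_o = 1.0
theta = truncated_proposal(x_o, 1000)  # parameters restricted to the HPR
x = simulate(theta)                    # next round's training data
```

Because the proposal is the prior restricted to a region (rather than an arbitrary distribution), the density estimator can be trained with plain maximum likelihood each round, which is what distinguishes TSNPE from correction-based multi-round schemes such as APT.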
Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]"
Open Datasets | Yes | "We compared TSNPE with NPE and APT on six benchmark tasks for which samples from the ground-truth posterior are available (see Appendix Sec. 6.9 for tasks) [Lueckmann et al., 2021]. ... We identify the posterior distribution given experimentally observed data [Haddad and Marder, 2021] (Fig. 5a) with APT and TSNPE (13 rounds, 30k simulations per round)."
Dataset Splits | No | The paper discusses simulation budgets and rounds of training but does not provide explicit train/validation/test splits (percentages or sample counts) for dataset partitioning.
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions PyTorch and the sbi toolkit but does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | "We used 13 rounds for the pyloric network and 6 rounds for the L5PC model. For both tasks, we used a single round for NPE. We used a simulation budget of 30k simulations per round and trained the density estimator for 20 epochs for the pyloric network and 50 epochs for L5PC."
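The reported setup is easy to capture as a configuration table; the sketch below does so and derives the implied total simulation budgets. The dictionary layout and names are hypothetical, not from the paper's code.

```python
# Hypothetical configuration mirroring the setup reported in the paper.
EXPERIMENT_SETUP = {
    "pyloric_network": {"rounds": 13, "simulations_per_round": 30_000,
                        "epochs_per_round": 20},
    "L5PC":            {"rounds": 6,  "simulations_per_round": 30_000,
                        "epochs_per_round": 50},
}

def total_simulation_budget(cfg):
    # total simulator calls across all rounds of one task
    return cfg["rounds"] * cfg["simulations_per_round"]

for task, cfg in EXPERIMENT_SETUP.items():
    print(task, total_simulation_budget(cfg))
# pyloric_network: 13 * 30k = 390k simulations; L5PC: 6 * 30k = 180k
```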