Variational Bayesian Monte Carlo with Noisy Likelihoods

Authors: Luigi Acerbi

NeurIPS 2020

Reproducibility assessment: each variable below is listed with its result and the LLM's supporting response (quoted text is excerpted from the paper).
Research Type: Experimental. "We tested different versions of VBMC and other surrogate-based inference algorithms on a novel benchmark problem set consisting of a variety of computational models applied to real data (see Section 4.1). For each problem, the goal of inference is to approximate the posterior distribution and the log marginal likelihood (LML) with a fixed budget of likelihood evaluations. For each algorithm, we performed 100 runs per problem with random starting points, and we evaluated performance with several metrics (see Section 4.2)."
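As a minimal sketch of this protocol (not the paper's actual harness; the `problem` and `algo` interfaces are hypothetical stand-ins), the benchmark loop amounts to repeated runs from random starting points under a fixed budget of likelihood evaluations:

```python
import numpy as np

def benchmark(problems, algorithms, n_runs=100, seed=0):
    """Hypothetical benchmark harness: for each problem and algorithm,
    perform repeated runs from random starting points with a fixed
    budget of likelihood evaluations, and record performance metrics."""
    rng = np.random.default_rng(seed)
    results = {}
    for problem in problems:
        budget = 50 * (problem.dim + 2)  # budget quoted under Experiment Setup
        for algo in algorithms:
            scores = []
            for _ in range(n_runs):  # 100 runs per problem in the paper
                x0 = problem.sample_start(rng)          # random starting point
                output = algo.run(problem, x0, budget)  # posterior + LML estimate
                scores.append(problem.metrics(output))  # e.g. error on the LML
            results[(problem.name, algo.name)] = scores
    return results
```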
Researcher Affiliation: Academia. "Luigi Acerbi, Department of Computer Science, University of Helsinki, luigi.acerbi@helsinki.fi. Previous affiliation: Department of Basic Neuroscience, University of Geneva."
Pseudocode: No. The paper describes the VBMC algorithm steps in prose but does not provide structured pseudocode or algorithm blocks.
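Since the algorithm is given only in prose, the following is a hedged schematic of one VBMC iteration in Python; the `state` object and all of its method names are hypothetical stand-ins, not the toolbox's API, and details such as the choice of acquisition function, GP hyperparameter handling, and warm-up are omitted:

```python
def vbmc_iteration(state, n_active=5):
    """Schematic VBMC iteration over a hypothetical `state` interface.

    Per iteration: (1) actively sample new points by maximizing an
    acquisition function over the current surrogate and evaluate the
    (possibly noisy) log-joint there; (2) refit the Gaussian process
    (GP) surrogate; (3) re-optimize the variational posterior (a
    mixture of Gaussians) against the ELBO, which is estimated via
    Bayesian quadrature on the GP."""
    for _ in range(n_active):
        x_new = state.maximize_acquisition()        # active sampling
        y_new, y_sd = state.noisy_log_joint(x_new)  # estimate and its noise SD
        state.add_observation(x_new, y_new, y_sd)
    state.fit_gp()           # update the surrogate model
    state.optimize_elbo()    # update the variational posterior
    return state
```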
Open Source Code: Yes. "The new features have been implemented in VBMC: https://github.com/lacerbi/vbmc"
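The linked repository is the MATLAB toolbox. As a hedged Python illustration of the noisy-target interface that such methods consume (a hypothetical wrapper, not the toolbox's actual API), the target returns both a log-joint estimate and the standard deviation of that estimate, for instance from an IBS-style estimator as sketched under Experiment Setup below:

```python
def make_noisy_target(loglik_estimator, log_prior):
    """Hypothetical wrapper: combine a stochastic log-likelihood
    estimator (returning an estimate and its standard deviation) with
    a log-prior into a noisy log-joint target returning (value, SD)."""
    def target(theta):
        loglik, loglik_sd = loglik_estimator(theta)
        return loglik + log_prior(theta), loglik_sd
    return target
```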
Open Datasets: Yes. "We fit choices and reaction times of two subjects (S1 and S2) from [43] using IBS with Nrep = 500... We fit timing responses (discretized with δtm = 0.02 s) of a representative subject from [45] using IBS with Nrep = 500... We fit datasets from two subjects (S1 and S2) from [47] using IBS with Nrep = 200 repeats... We fit the neural recordings of one V1 and one V2 cell from [48]... We fit 104 trials from a representative subject dataset [50] using IBS with Nrep = 500..."
Dataset Splits: No. The paper does not explicitly provide details about training, validation, and test dataset splits.
Hardware Specification: No. "This work has utilized the NYU IT High Performance Computing resources and services." This is a general statement about computing resources and lacks specific details such as CPU or GPU models or memory specifications.
Software Dependencies: No. The paper does not provide specific software dependencies with version numbers (e.g., language or library versions, or specific solver versions).
Experiment Setup: Yes. "For each problem, we allow a budget of 50(D+2) likelihood evaluations. For each algorithm, we use the same default settings across problems... In the synthetic likelihood (SL) approach, we run Nsim simulations for each evaluation... With inverse binomial sampling (IBS), we obtain unbiased estimates of the log-likelihood... we repeat the process Nrep times and average the estimates for higher precision... We estimate the log-likelihood via the log-SL approach using the same 13 summary statistics as in [3, 4, 10, 25], with Nsim = 100 simulations per evaluation... using IBS with Nrep = 500."
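To make the two estimators quoted above concrete, here are minimal sketches assuming a generic simulator interface (`simulate` and `simulate_stats` are placeholder names, not from the paper). First, inverse binomial sampling: for each trial, draw simulated responses until one matches the observed response; if the first match occurs at draw K, the unbiased log-likelihood estimate for that trial is -(1 + 1/2 + ... + 1/(K-1)), and averaging Nrep independent repeats reduces the variance:

```python
import numpy as np

def ibs_loglik(simulate, stimuli, responses, n_rep=500, rng=None):
    """Inverse binomial sampling (IBS): unbiased estimator of the
    log-likelihood of discrete responses under a simulator model.
    `simulate(stimulus, rng)` is an assumed simulator interface."""
    rng = np.random.default_rng(rng)
    estimates = np.empty(n_rep)
    for r in range(n_rep):
        total = 0.0
        for stim, resp in zip(stimuli, responses):
            k = 1
            while simulate(stim, rng) != resp:  # draw until first match
                total -= 1.0 / k                # accumulate -sum_{k<K} 1/k
                k += 1
        estimates[r] = total
    # Average over repeats; the standard error can be passed to VBMC
    # as the noise level of the log-likelihood estimate.
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_rep)
```

And a minimal log synthetic likelihood in the style of Wood: simulate Nsim datasets at the candidate parameters, fit a Gaussian to their summary statistics, and evaluate the observed statistics under that Gaussian:

```python
import numpy as np

def log_synthetic_likelihood(simulate_stats, s_obs, theta, n_sim=100, rng=None):
    """Log synthetic likelihood: Gaussian approximation to the
    distribution of summary statistics, estimated from simulations.
    `simulate_stats(theta, rng)` returns a vector of summary statistics."""
    rng = np.random.default_rng(rng)
    S = np.stack([simulate_stats(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)
    diff = np.asarray(s_obs) - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (len(diff) * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(Sigma, diff))
```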