Variational Bayesian Monte Carlo

Authors: Luigi Acerbi

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate VBMC both on several synthetic likelihoods and on a neuronal model with data from real neurons. Across all tested problems and dimensions (up to D = 10), VBMC performs consistently well in reconstructing the posterior and the model evidence with a limited budget of likelihood evaluations, unlike other methods that work only in very low dimensions."
Researcher Affiliation | Academia | "Luigi Acerbi, Department of Basic Neuroscience, University of Geneva, luigi.acerbi@unige.ch"
Pseudocode | Yes | "Algorithm 1 Variational Bayesian Monte Carlo. Input: target log joint f, starting point x0, plausible bounds PLB, PUB, additional options"
Open Source Code | Yes | "Code available at https://github.com/lacerbi/vbmc."
Open Datasets | Yes | "We consider a computational model of neuronal orientation selectivity in visual cortex [14]. We fit the neural recordings of one V1 and one V2 cell with the authors' neuronal model that combines effects of filtering, suppression, and response nonlinearity [14]."
Dataset Splits | No | The paper uses synthetic and real neuronal data, but does not specify explicit training/validation/test splits with percentages or sample counts.
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running the experiments are provided in the paper.
Software Dependencies | No | No software dependencies with version numbers (e.g., library or solver names with versions) are explicitly mentioned in the paper.
Experiment Setup | Yes | "We set βLCB = 3 unless specified otherwise... At the beginning of each iteration after the first, VBMC actively samples nactive points (nactive = 5 by default in this work)... In each iteration, we collect ngp = round(80/n) samples... We set a maximum number of components Kmax = n^(2/3)... For VBMC we set wmin = 0.01 and ε = 0.01... We define the current variational solution as improving if the ELCBO of the last iteration is higher than the ELCBO in the past few iterations (nrecent = 4)... The algorithm terminates when obtaining a stable solution for nstable = 8 iterations (with at most one non-stable iteration in-between)."
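The inputs quoted for Algorithm 1 (a target log joint f, a starting point x0, and plausible bounds PLB, PUB) can be illustrated with a toy problem. This sketch uses plain NumPy/SciPy, not the authors' code; the two-dimensional Gaussian target and the bound values are invented purely for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

D = 2  # toy problem dimensionality (the paper tests up to D = 10)

def log_joint(x):
    # Toy target log joint f = log-prior + log-likelihood:
    # standard normal prior, shifted narrower normal likelihood.
    log_prior = multivariate_normal.logpdf(x, mean=np.zeros(D))
    log_lik = multivariate_normal.logpdf(x, mean=np.ones(D), cov=0.5 * np.eye(D))
    return log_prior + log_lik

x0 = np.zeros(D)                             # starting point
PLB, PUB = -3 * np.ones(D), 3 * np.ones(D)   # plausible lower/upper bounds

f0 = log_joint(x0)  # a single evaluation of the target at the start point
```

In the real algorithm the plausible bounds are not hard constraints; they inform where the initial design points and the variational posterior are placed.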
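As a reading aid for the termination rule quoted in the Experiment Setup row, here is a minimal sketch of one plausible interpretation (an assumption, not the authors' implementation): the algorithm stops once the trailing window of nstable = 8 iterations contains at most one non-stable iteration.

```python
def should_terminate(stable_flags, n_stable=8):
    """Hypothetical reading of the VBMC stopping rule.

    stable_flags: one boolean per iteration (True = that iteration's
    variational solution was judged stable). Terminate once the last
    n_stable iterations contain at most one non-stable iteration.
    """
    window = stable_flags[-n_stable:]
    return len(window) == n_stable and window.count(False) <= 1
```

For example, eight consecutive stable iterations trigger termination, and so does a window of eight with a single unstable iteration in the middle, matching the paper's "at most one non-stable iteration in-between".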