Variational Boosting: Iteratively Refining Posterior Approximations

Authors: Andrew C. Miller, Nicholas J. Foti, Ryan P. Adams

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply variational boosting to synthetic and real statistical models, and show that the resulting posterior inferences compare favorably to existing variational algorithms.
Researcher Affiliation | Collaboration | Harvard University, Cambridge, MA, USA; University of Washington, Seattle, WA, USA; Google Brain, Cambridge, MA, USA.
Pseudocode | Yes | This initialization procedure is detailed in Section A and Algorithm 1 of the supplement.
Open Source Code | Yes | Code available at https://github.com/andymiller/vboost.
Open Datasets | Yes | Binomial Regression: The model describes the binomial rates of success (batting averages) of baseball players using a hierarchical model (Efron & Morris, 1975)... Multi-level Poisson GLM: We use VBoost to approximate the posterior of a hierarchical Poisson GLM, a common non-conjugate Bayesian model. Here, we focus on a specific model that was formulated to measure the relative rates of stop-and-frisk events for different ethnicities and in different precincts (Gelman et al., 2007)... Bayesian Neural Network: We report held-out predictive performance for different approximate posteriors for six datasets. (A hedged sketch of the binomial model appears below the table.)
Dataset Splits | No | For the Bayesian Neural Network experiment the paper states, 'First, we create a random partition into a 90% training set and 10% testing set,' but it does not mention a validation split. (A minimal split sketch appears below the table.)
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., CPU or GPU models, memory), offering only general statements such as 'trade computation for improved accuracy'.
Software Dependencies | No | The paper mentions software components such as adam (Kingma & Ba, 2014) and autograd (Maclaurin et al., 2015a;b) but does not give version numbers for these or any other dependencies.
Experiment Setup | Yes | For each round of VBoost, we estimate ∇_{λ,ρ} L^{(C+1)} using 400 samples each for q_{C+1} and q_C. We use 1,000 iterations of Adam with default parameters to update ρ_{C+1} and λ_{C+1} (Kingma & Ba, 2014)... We use a single 50-unit hidden layer, with ReLU activation functions... We allow each additional component only 200 iterations. To save time on initialization, we draw 100 samples from the existing approximation, and initialize the new component with the sample with maximum weight. (A sketch of one VBoost round appears below the table.)
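
For the Open Datasets row, the hierarchical binomial (batting-average) model can be written compactly. The sketch below is a generic logit-normal hierarchy in the spirit of Efron & Morris (1975); it is not necessarily the paper's exact parameterization, and the hyperpriors are illustrative assumptions.

```python
import autograd.numpy as np

def baseball_log_joint(z, y, K):
    """Hedged sketch of a hierarchical binomial model:
    y_j ~ Binomial(K_j, sigmoid(theta_j)), theta_j ~ N(mu, sigma^2).
    z packs [mu, log_sigma, theta_1, ..., theta_J]; y and K are arrays of
    hit counts and at-bats. Hyperpriors are weak normals chosen for
    illustration, not taken from the paper."""
    mu, log_s, theta = z[0], z[1], z[2:]
    sigma = np.exp(log_s)
    lp_hyper = -0.5 * (mu / 10.0) ** 2 - 0.5 * (log_s / 10.0) ** 2  # assumed N(0, 10^2)
    lp_theta = np.sum(-0.5 * ((theta - mu) / sigma) ** 2 - log_s)   # player effects
    # Binomial log-likelihood in logit form: y*theta - K*log(1 + e^theta)
    lp_like = np.sum(y * theta - K * np.logaddexp(0.0, theta))
    return lp_hyper + lp_theta + lp_like
```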
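
For the Dataset Splits row, the quoted 90%/10% partition amounts to a few lines; this minimal sketch uses an illustrative function name and fixed seed, neither of which comes from the paper.

```python
import numpy as np

def random_split(X, y, frac_train=0.9, seed=0):
    """Random 90%/10% train/test partition, as quoted for the BNN
    experiments. The paper mentions no separate validation split."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    n_train = int(frac_train * len(X))
    return (X[idx[:n_train]], y[idx[:n_train]],
            X[idx[n_train:]], y[idx[n_train:]])
```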
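
For the Experiment Setup row, one round of VBoost adds a diagonal-Gaussian component q_λ to the current mixture q_C and fits (λ, ρ) by stochastic gradient ascent on the ELBO. The sketch below follows the quoted recipe (400 samples per distribution, 1,000 default-parameter Adam steps, max-weight initialization from 100 draws) using autograd's `grad` and `adam` utilities, which do exist with these signatures. Everything else is an assumption rather than the authors' implementation (their code is in the linked repo): the `lnpdf` target, the `prev` object with `sample`/`logpdf` methods, the sigmoid parameterization of ρ, and reading "weight" as an importance weight.

```python
import autograd.numpy as np
from autograd import grad
from autograd.misc.optimizers import adam   # Adam (Kingma & Ba, 2014)

def mixture_logpdf(x, prev, rho, lam, D):
    """log[(1-rho)*q_C(x) + rho*N(x; mu, diag(sigma^2))] for a batch x of shape (n, D).
    Assumes prev.logpdf(x) returns n log-density values."""
    mu, log_sigma = lam[:D], lam[D:]
    comp = -0.5 * np.sum(((x - mu) / np.exp(log_sigma)) ** 2
                         + 2.0 * log_sigma + np.log(2 * np.pi), axis=1)
    return np.logaddexp(np.log1p(-rho) + prev.logpdf(x), np.log(rho) + comp)

def make_neg_elbo(lnpdf, prev, D, n_samps=400):
    """Negative ELBO of the (C+1)-component mixture. The expectation splits
    across mixture components: draws from q_C stay fixed, while draws from
    the new component are reparameterized so gradients flow into lam and rho.
    lnpdf is assumed to accept (n, D) batches."""
    def neg_elbo(params, t):
        lam, rho = params[:-1], 1.0 / (1.0 + np.exp(-params[-1]))  # rho in (0, 1)
        mu, log_sigma = lam[:D], lam[D:]
        x_new = mu + np.random.randn(n_samps, D) * np.exp(log_sigma)  # ~ q_lam
        x_old = prev.sample(n_samps)                                  # ~ q_C (fixed)
        f = lambda x: lnpdf(x) - mixture_logpdf(x, prev, rho, lam, D)
        return -(rho * np.mean(f(x_new)) + (1.0 - rho) * np.mean(f(x_old)))
    return neg_elbo

def vboost_round(lnpdf, prev, D):
    # Max-weight initialization: 100 draws from the existing approximation,
    # scored by the (assumed) log importance weight lnpdf(x) - log q_C(x).
    xs = prev.sample(100)
    mu0 = xs[np.argmax(lnpdf(xs) - prev.logpdf(xs))]
    params0 = np.concatenate([mu0, -2.0 * np.ones(D), [0.0]])  # small sigmas; rho starts at 0.5
    # 1,000 Adam iterations with default parameters, per the quoted setup.
    return adam(grad(make_neg_elbo(lnpdf, prev, D)), params0, num_iters=1000)
```

The mixture decomposition keeps the q_C samples out of the reparameterized gradient path, which is what lets each round reuse the earlier components unchanged while only the new component and its weight are optimized.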