MCMC Variational Inference via Uncorrected Hamiltonian Annealing

Authors: Tomas Geffner, Justin Domke

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "This section presents results using UHA for Bayesian inference problems on several models of varying dimensionality and for VAE training. We compare against Hamiltonian AIS, IW, HVI and HVAE."
Researcher Affiliation | Academia | "Tomas Geffner, College of Information and Computer Science, University of Massachusetts Amherst, Amherst, MA, tgeffner@cs.umass.edu; Justin Domke, College of Information and Computer Science, University of Massachusetts Amherst, Amherst, MA, domke@cs.umass.edu"
Pseudocode | Yes | "Algorithm 1: Corrected T_m(z_{m+1}, ρ_{m+1} | z_m, ρ_m)"
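The quoted pseudocode is the paper's corrected transition; the method itself drops the accept/reject step. As a rough illustration of the kind of update involved, here is a minimal JAX sketch of one uncorrected transition (partial momentum refresh followed by a leapfrog step). The function names, the toy Gaussian target, and the parameter values are ours for illustration, not taken from the paper.

```python
import jax
import jax.numpy as jnp

def leapfrog(z, rho, grad_logp, eps):
    """One leapfrog step of Hamiltonian dynamics (identity mass matrix)."""
    rho = rho + 0.5 * eps * grad_logp(z)
    z = z + eps * rho
    rho = rho + 0.5 * eps * grad_logp(z)
    return z, rho

def uncorrected_transition(key, z, rho, grad_logp, eps, eta):
    """Partial momentum refresh with damping eta, then one leapfrog step.

    No Metropolis-Hastings accept/reject is applied, hence 'uncorrected'.
    eta = 0 gives a full momentum refresh; eta near 1 keeps most of the
    previous momentum.
    """
    noise = jax.random.normal(key, rho.shape)
    rho = eta * rho + jnp.sqrt(1.0 - eta**2) * noise
    return leapfrog(z, rho, grad_logp, eps)

# Toy target: a standard Gaussian in d = 2.
logp = lambda z: -0.5 * jnp.sum(z**2)
grad_logp = jax.grad(logp)

key = jax.random.PRNGKey(0)
z, rho = jnp.zeros(2), jnp.ones(2)
z, rho = uncorrected_transition(key, z, rho, grad_logp, eps=0.1, eta=0.9)
```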
Open Source Code | Yes | "3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]"
Open Datasets | Yes | "We consider four models: Brownian motion (d = 32)... Convection Lorenz bridge (d = 90)... Logistic regression with the a1a (d = 120) and madelon (d = 500) datasets. The first two are obtained from the Inference gym [36]. ... We use three datasets: mnist [25] (numbers 1-9), emnist-letters [11] (letters A-Z), and kmnist [10] (cursive Kuzushiji)."
Dataset Splits | Yes | "In all cases we use stochastic binarization [33] and a training set of 50000 samples, a validation set of 10000 samples, and a test set of 10000 samples."
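Stochastic binarization [33] resamples each pixel as a Bernoulli draw with probability equal to its grayscale intensity, so the binary data differ between epochs. A minimal sketch of that preprocessing step, assuming intensities already scaled to [0, 1] (the helper name and batch shape are ours):

```python
import jax
import jax.numpy as jnp

def stochastic_binarize(key, images):
    """Resample each pixel as Bernoulli(intensity), intensities in [0, 1]."""
    return (jax.random.uniform(key, images.shape) < images).astype(jnp.float32)

# Toy batch of grayscale "images" in [0, 1], MNIST-shaped.
batch = jax.random.uniform(jax.random.PRNGKey(0), (4, 28, 28))
binarized = stochastic_binarize(jax.random.PRNGKey(1), batch)
```

In practice a fresh PRNG key would be used per epoch so each pass over the training set sees a different binarization.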
Hardware Specification | No | "3. If you ran experiments... (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]"
Software Dependencies | No | "We implement all algorithms using Jax [5]." (A version number for Jax or any other library/dependency is not specified.)
Experiment Setup | Yes | "optimize the objective using Adam [23] with a step-size of 0.001 for 5000 steps. For UHA we tune the initial approximation q(z), the integrator's step-size ϵ and the damping coefficient η. ... tune the parameters of each method by running Adam for 5000 steps. We repeat all simulations for different step-sizes in {10^-3, 10^-4, 10^-5}, and select the best one for each method. ... We consider η ∈ {0.5, 0.9, 0.99} and three values of ϵ that correspond to three different rejection rates: 0.05, 0.25 and 0.5."