Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables

Authors: Guangyao Zhou

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The superior performances of M-HMC over existing methods are demonstrated with numerical experiments on Gaussian mixture models (GMMs), variable selection in Bayesian logistic regression (BLR), and correlated topic models (CTMs)."
Researcher Affiliation | Industry | Guangyao Zhou, Vicarious AI, Union City, CA 94587, USA (stannis@vicarious.com)
Pseudocode | Yes | Algorithm 1: M-HMC with Laplace momentum (a simplified illustration of the Laplace-momentum discrete update follows this table)
Open Source Code | Yes | Code available at https://github.com/StannisZhou/mixed_hmc
Open Datasets | Yes | "We use the Associated Press (AP) dataset [15], which consists of 2246 documents."
Dataset Splits | No | The paper specifies burn-in and actual sample counts for its MCMC chains but does not provide explicit training/validation/test splits in the traditional machine-learning sense.
Hardware Specification | No | The paper notes that the implementations rely on JAX but does not specify the CPU or GPU models, or any other hardware details, used to run the experiments.
Software Dependencies | No | The paper mentions JAX, NUMBA, pypolyagamma, NumPyro, and ArviZ, but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "For each sampler, we draw 10^4 burn-in and 10^4 actual samples in 192 independent chains." "For each sampler, we use 192 independent chains, each with 1000 burn-in and 2000 actual samples." "For M-HMC, we inspect short trial runs on a separate document, and fix T, n_D for all 20 picked documents and set L = 80N_d for document d." (A generic chain-driver sketch also appears after this table.)
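
The Pseudocode row refers to the paper's Algorithm 1, M-HMC with Laplace momentum. To make the role of Laplace momentum concrete, here is a minimal sketch of a discrete-coordinate update in that spirit: with kinetic energy sum_i |p_i|, a proposed state change is accepted only if the momentum's magnitude can pay for the potential-energy increase, which is then deducted from the momentum; otherwise the momentum is reflected. The Ising-style `potential`, the uniform proposal, and all names here are illustrative assumptions, not the paper's implementation; the full Algorithm 1 additionally evolves continuous variables and uses a specific schedule for visiting discrete sites.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(x):
    # Hypothetical discrete target: an Ising-like chain rewarding agreeing
    # neighbors. U(x) = -log pi(x) up to a constant; lower U = more probable.
    return -float(np.sum(x[:-1] == x[1:]))

def discrete_update_pass(x, n_states=2, n_steps=20):
    # One pass of M-HMC-style discrete updates with Laplace momentum.
    # Accepted moves pay their potential-energy increase out of |p_i|;
    # rejected moves reflect the momentum. Simplified sketch only.
    x = x.copy()
    p = rng.laplace(size=x.size)  # Laplace momentum; kinetic energy sum(|p_i|)
    for _ in range(n_steps):
        i = rng.integers(x.size)            # discrete site to update
        proposal = x.copy()
        proposal[i] = rng.integers(n_states)
        dU = potential(proposal) - potential(x)
        if abs(p[i]) > dU:                  # momentum can pay for the jump
            x = proposal
            p[i] = np.sign(p[i]) * (abs(p[i]) - dU)
        else:                               # not enough energy: reflect
            p[i] = -p[i]
    return x

x = rng.integers(2, size=12)
for _ in range(200):
    x = discrete_update_pass(x)
print(x)  # draws should concentrate on mostly-constant strings
```

Note the energy bookkeeping: whether a move is accepted or reflected, the total energy U(x) + sum(|p_i|) is conserved exactly at each event, which is what makes Laplace momentum convenient for discrete jumps.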
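The Experiment Setup row quotes per-experiment chain counts and burn-in/sample sizes. A generic driver for that kind of protocol could look like the following sketch; `run_chain`, `rw_step`, and the toy Gaussian target are stand-in assumptions (the paper's samplers build on JAX), shown only to make the burn-in/sampling split and the independent-chains loop concrete.

```python
import numpy as np

N_CHAINS, N_BURNIN, N_SAMPLES = 192, 10_000, 10_000  # one setting quoted above

def run_chain(step, x0, seed):
    # Run one chain: discard N_BURNIN draws, then keep N_SAMPLES draws.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(N_BURNIN):
        x = step(x, rng)
    samples = np.empty((N_SAMPLES,) + x.shape)
    for t in range(N_SAMPLES):
        x = step(x, rng)
        samples[t] = x
    return samples

def rw_step(x, rng):
    # Stand-in kernel: random-walk Metropolis targeting a standard normal.
    prop = x + 0.5 * rng.normal(size=x.shape)
    log_alpha = 0.5 * (x @ x - prop @ prop)
    return prop if np.log(rng.uniform()) < log_alpha else x

# Demo with 4 chains; the full protocol would use range(N_CHAINS).
chains = [run_chain(rw_step, np.zeros(2), seed) for seed in range(4)]
print(np.mean([c.mean() for c in chains]))  # should be near 0
```

Independent chains with distinct seeds, as above, are what make the paper's reported between-chain diagnostics meaningful.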