Nonparametric Hamiltonian Monte Carlo
Authors: Carol Mak, Fabian Zaiser, Luke Ong
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper introduces the Nonparametric Hamiltonian Monte Carlo (NP-HMC) algorithm which generalises HMC to nonparametric models. We provide a correctness proof of NP-HMC, and empirically demonstrate significant performance improvements over existing approaches on several nonparametric examples. |
| Researcher Affiliation | Academia | Carol Mak¹, Fabian Zaiser¹, Luke Ong¹. ¹Department of Computer Science, University of Oxford, United Kingdom. Correspondence to: Carol Mak <pui.mak@cs.ox.ac.uk>. |
| Pseudocode | Yes | Figure 4. Pseudocode for Nonparametric Hamiltonian Monte Carlo |
| Open Source Code | Yes | The code for our implementation and experiments is available at https://github.com/fzaiser/nonparametric-hmc and archived as (Zaiser & Mak, 2021). |
| Open Datasets | Yes | We used this model to generate N = 200 training data points for a fixed θ = (K = 9, µ_{1...K}). |
| Dataset Splits | Yes | We used this model to generate N = 200 training data points for a fixed θ = (K = 9, µ_{1...K}). We computed the log pointwise predictive density (LPPD) for a test set with N = 50 data points Y = {y_1, ..., y_N}, generated from the same θ as the training data. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | We implemented the NP-HMC algorithm and its variants (NP-RHMC and NP-DHMC) in Python, using PyTorch (Paszke et al., 2019) for automatic differentiation. While PyTorch is mentioned, no version number is given, so the software environment is not reproducible under a strict definition. |
| Experiment Setup | Yes | Table 1. Total variation distance from the ground truth for the geometric distribution, averaged over 10 runs. Each run: 10^3 NP-DHMC samples with 10^2 burn-in, 5 leapfrog steps of size 0.1; and 5 × 10^3 LMH, PGibbs and RMH samples. |
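The setup above fixes the two HMC integrator hyperparameters reported in Table 1: 5 leapfrog steps of step size 0.1. As a minimal sketch of what those parameters control, the following is a standard leapfrog integrator for a fixed-dimension Hamiltonian (it is not the paper's NP-DHMC, which additionally handles trajectories that change dimension; the function and variable names here are illustrative, not from the paper's code).

```python
import numpy as np

def leapfrog(q, p, grad_U, step_size=0.1, n_steps=5):
    """Simulate Hamiltonian dynamics with the leapfrog scheme.

    q: position (parameters), p: momentum, grad_U: gradient of the
    potential energy U(q) = -log density. Defaults match Table 1's
    reported settings (5 steps of size 0.1).
    """
    q, p = q.copy(), p.copy()
    p -= 0.5 * step_size * grad_U(q)        # initial half-step for momentum
    for i in range(n_steps):
        q += step_size * p                   # full step for position
        if i != n_steps - 1:
            p -= step_size * grad_U(q)       # full step for momentum
    p -= 0.5 * step_size * grad_U(q)         # final half-step for momentum
    return q, -p                             # negate momentum for reversibility

# Example: standard normal target, U(q) = q^2 / 2, so grad_U(q) = q.
q1, p1 = leapfrog(np.array([1.0]), np.array([1.0]), lambda q: q)
```

Because leapfrog is symplectic, the Hamiltonian H(q, p) = U(q) + p²/2 is approximately conserved along the trajectory (error O(step_size²)), which is what keeps the Metropolis acceptance rate high for small step sizes like 0.1.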