Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families

Authors: Heiko Strathmann, Dino Sejdinovic, Samuel Livingstone, Zoltan Szabo, Arthur Gretton

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We support our claims with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC."
Researcher Affiliation | Academia | Gatsby Unit, University College London; Department of Statistics, University of Oxford; School of Mathematics, University of Bristol
Pseudocode | Yes | Algorithm 1: Kernel Hamiltonian Monte Carlo pseudo-code (a hedged sketch of one such transition appears below the table)
Open Source Code | Yes | "All code can be found at https://github.com/karlnapf/kernel_hmc"
Open Datasets | Yes | "We next apply KMC to sample from the marginal posterior over hyper-parameters of a Gaussian Process Classification (GPC) model on the UCI Glass dataset [24]."
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits; it describes parameter tuning but does not specify a held-out validation partition.
Hardware Specification | No | The paper mentions that "All samplers took 1h time" but does not specify hardware details such as CPU/GPU models or memory.
Software Dependencies | No | The paper does not provide version numbers for ancillary software dependencies.
Experiment Setup | Yes | "We tuned the scaling of KAMH and RW to achieve 23% acceptance. We set HMC parameters to achieve 80% acceptance and then used the same parameters for KMC. We ran all samplers for 2000+200 iterations from a random start point, discarded the burn-in, and computed acceptance rates, the norm of the empirical mean Ê[x], and the minimum effective sample size (ESS) across dimensions. For KMC we pick randomly between 1 and 10 leapfrog steps, a step size chosen uniformly from [0.01, 0.1], a standard Gaussian momentum, and a kernel tuned by cross-validation." (A sketch mirroring this setup appears below the table.)
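
The Pseudocode row above points to the paper's Algorithm 1 (Kernel Hamiltonian Monte Carlo). For orientation, here is a minimal Python sketch of one such transition, assuming a surrogate gradient of the log target has already been fitted with a kernel exponential family: the leapfrog dynamics follow the surrogate, while the Metropolis-Hastings correction evaluates only the exact unnormalised target, which is what makes the sampler gradient-free. The names `kmc_transition`, `log_target`, and `surrogate_grad` are illustrative, not the API of the linked repository.

```python
import numpy as np

def kmc_transition(x, log_target, surrogate_grad, step_size, num_steps, rng):
    """One Kernel HMC transition (sketch, not the repository's implementation).

    log_target:     unnormalised log target density (no gradients needed)
    surrogate_grad: kernel-exponential-family estimate of its gradient,
                    used only to drive the leapfrog dynamics
    """
    p = rng.standard_normal(x.shape)               # standard Gaussian momentum
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration along the surrogate gradient field.
    p_new = p_new + 0.5 * step_size * surrogate_grad(x_new)
    for _ in range(num_steps - 1):
        x_new = x_new + step_size * p_new
        p_new = p_new + step_size * surrogate_grad(x_new)
    x_new = x_new + step_size * p_new
    p_new = p_new + 0.5 * step_size * surrogate_grad(x_new)

    # Metropolis-Hastings correction with the *exact* target, so the chain
    # leaves the target invariant even when the surrogate is inaccurate.
    log_accept = (log_target(x_new) - log_target(x)
                  + 0.5 * (p @ p) - 0.5 * (p_new @ p_new))
    if np.log(rng.uniform()) < log_accept:
        return x_new, True
    return x, False
```

Because the acceptance step uses the exact target, a poor surrogate only degrades efficiency, not correctness of the invariant distribution.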
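
Similarly, the Experiment Setup row translates into a driver loop along these lines. This is a hypothetical reconstruction that reuses `kmc_transition` from the sketch above and assumes `log_target`, `surrogate_grad`, and the dimension `d` are defined; the minimum ESS across dimensions would additionally need a standard autocorrelation-based estimator, which is omitted here.

```python
# Mirrors the reported protocol: 2000+200 iterations from a random start,
# 200 burn-in iterations discarded, leapfrog steps drawn from {1, ..., 10},
# and step sizes drawn uniformly from [0.01, 0.1].
rng = np.random.default_rng(0)
x = rng.standard_normal(d)                         # random start point
samples, accepts = [], 0
for i in range(2200):
    num_steps = int(rng.integers(1, 11))           # uniform on {1, ..., 10}
    step_size = rng.uniform(0.01, 0.1)
    x, accepted = kmc_transition(x, log_target, surrogate_grad,
                                 step_size, num_steps, rng)
    if i >= 200:                                   # keep post-burn-in draws only
        samples.append(x)
        accepts += accepted
samples = np.asarray(samples)
acceptance_rate = accepts / len(samples)
mean_norm = np.linalg.norm(samples.mean(axis=0))   # norm of the empirical mean Ê[x]
```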