Kernel Adaptive Metropolis-Hastings

Authors: Dino Sejdinovic, Heiko Strathmann, Maria Lomeli Garcia, Christophe Andrieu, Arthur Gretton

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Kernel Adaptive Metropolis-Hastings outperforms competing fixed and adaptive samplers on multivariate, highly nonlinear target distributions arising in both real-world and synthetic examples. "We provide experimental comparisons with other fixed and adaptive samplers in Section 5, where we show superior performance in the context of Pseudo-Marginal MCMC for Bayesian classification, and on synthetic target distributions with highly nonlinear shape."
Researcher Affiliation | Academia | Gatsby Unit, CSML, University College London, UK, and School of Mathematics, University of Bristol, UK
Pseudocode | Yes | MCMC Kameleon. Input: unnormalized target π, subsample size n, scaling parameters ν, γ, adaptation probabilities {p_t}_{t=0}^∞, kernel k. At iteration t+1: (1) with probability p_t, update a random subsample z = {z_i}_{i=1}^{min(n,t)} of the chain history {x_i}_{i=0}^{t-1}; (2) sample the proposed point x* from q_z(·|x_t) = N(x_t, γ²I + ν²M_{z,x_t}HM_{z,x_t}^⊤), where M_{z,x_t} is given in Eq. (3) and H = I - (1/n)1_n1_n^⊤ is the centering matrix; (3) accept/reject with the Metropolis-Hastings acceptance probability A(x_t, x*) in Eq. (4): x_{t+1} = x* w.p. A(x_t, x*), and x_{t+1} = x_t w.p. 1 - A(x_t, x*). (A Python sketch of this step follows the table.)
Open Source Code | Yes | A Python implementation of MCMC Kameleon is available at https://github.com/karlnapf/kameleon-mcmc.
Open Datasets | Yes | "We consider the UCI Glass dataset (Bache & Lichman, 2013)."
Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was provided.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running the experiments were mentioned.
Software Dependencies | No | The paper mentions a 'Python implementation' but does not provide specific version numbers for Python or any other software libraries or dependencies used in the experiments.
Experiment Setup | Yes | γ was fixed to 0.2, and adaptation of the proposal stops after the burn-in of the chain. In all experiments, a random subsample z of size n = 1000 was used, with a Gaussian kernel whose bandwidth was selected according to the median heuristic. Each of these algorithms was run for 100,000 iterations (with a 20,000-iteration burn-in) and every 20th sample was kept. (The bandwidth choice and run schedule are sketched after the table.)
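
The pseudocode row maps almost directly onto code. Below is a minimal Python sketch of one MCMC Kameleon iteration (steps 2 and 3), assuming a Gaussian kernel k(x, z) = exp(-||x - z||^2 / (2σ^2)), so that the columns of M_{z,x_t} are the kernel gradients ∇_x k(x, z_i) up to the constant factor in Eq. (3). The function names and kernel parametrisation are illustrative assumptions, not the API of the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kernel_grads(x, Z, sigma):
    """Columns of M_{z,x}: gradients grad_x k(x, z_i) for a Gaussian kernel
    k(x, z) = exp(-||x - z||^2 / (2 sigma^2)).  This parametrisation is an
    assumption; the algorithm only requires a differentiable kernel k."""
    diffs = x[None, :] - Z                              # (n, d)
    k_vals = np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2))
    return (-(diffs / sigma**2) * k_vals[:, None]).T    # (d, n)

def proposal_cov(x, Z, nu, gamma, sigma):
    """Proposal covariance gamma^2 I + nu^2 M H M^T from step 2."""
    d, n = x.size, Z.shape[0]
    M = kernel_grads(x, Z, sigma)
    H = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    return gamma**2 * np.eye(d) + nu**2 * M @ H @ M.T

def kameleon_step(x_t, log_target, Z, nu, gamma, sigma, rng):
    """One accept/reject step.  The covariance depends on the current state,
    so the proposal is asymmetric and both q_z terms enter the ratio."""
    cov_fwd = proposal_cov(x_t, Z, nu, gamma, sigma)
    x_star = rng.multivariate_normal(x_t, cov_fwd)
    cov_bwd = proposal_cov(x_star, Z, nu, gamma, sigma)
    log_alpha = (log_target(x_star) - log_target(x_t)
                 + multivariate_normal.logpdf(x_t, mean=x_star, cov=cov_bwd)
                 - multivariate_normal.logpdf(x_star, mean=x_t, cov=cov_fwd))
    return x_star if np.log(rng.uniform()) < log_alpha else x_t
```

Iterating kameleon_step while refreshing the subsample Z from the chain history with probability p_t (step 1) yields the full sampler.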
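
The experiment-setup row relies on the median heuristic for the kernel bandwidth. A minimal sketch of one common form of that heuristic, together with the reported run schedule, follows; the `chain` array is hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist

def median_heuristic(Z):
    """One common form of the median heuristic: set the Gaussian kernel
    bandwidth to the median pairwise distance in the subsample Z
    (variants use squared distances or an extra scaling factor)."""
    return np.median(pdist(Z))

# Reported run schedule: 100,000 iterations with a 20,000 burn-in,
# keeping every 20th sample; `chain` is a hypothetical (iterations, d) array.
burn_in, thin = 20_000, 20
# kept = chain[burn_in::thin]
```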