Entropy-based adaptive Hamiltonian Monte Carlo

Authors: Marcel Hirt, Michalis Titsias, Petros Dellaportas

NeurIPS 2021

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response (paper excerpts in quotes).
Research Type: Experimental
"Empirical evidence suggests that the adaptation method can outperform different versions of HMC schemes by adjusting the mass matrix to the geometry of the target distribution and by providing some control on the integration time." "Numerical experiments. This section illustrates the mixing performance of the entropy-based sampler for a variety of target densities."
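For context on what "adjusting the mass matrix" means here: the mass matrix M enters HMC through the kinetic energy and the leapfrog integrator. Below is a minimal sketch of a leapfrog trajectory with a diagonal M; this is standard HMC machinery, not the paper's adaptation scheme, and M is exactly the quantity the entropy-based method tunes.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L, M_diag):
    """Leapfrog trajectory for H(q, p) = U(q) + 0.5 * p^T M^{-1} p,
    with U(q) = -log target(q) and diagonal mass matrix M = diag(M_diag)."""
    p = p - 0.5 * eps * grad_U(q)      # initial half step for momentum
    for _ in range(L - 1):
        q = q + eps * p / M_diag       # full position step uses M^{-1}
        p = p - eps * grad_U(q)        # full momentum step
    q = q + eps * p / M_diag
    p = p - 0.5 * eps * grad_U(q)      # final half step for momentum
    return q, -p                       # momentum flip keeps the map reversible
```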
Researcher Affiliation: Collaboration
Marcel Hirt, Department of Statistical Science, University College London, UK (marcel.hirt.16@ucl.ac.uk); Michalis K. Titsias, DeepMind, London, UK (mtitsias@google.com); Petros Dellaportas, Department of Statistical Science, University College London, UK; Department of Statistics, Athens University of Economics and Business, Greece; and The Alan Turing Institute, UK.
Pseudocode: Yes
"Algorithm 1: Sample the next state q and adapt β, γ and θ."
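The "sample the next state q" half of Algorithm 1 is a Metropolis-adjusted HMC transition. A minimal sketch of such a transition follows, reusing the `leapfrog` function above; the function name and interface are illustrative, and the gradient-based updates of β, γ and θ that make up the other half of Algorithm 1 are omitted here (see the adaptation sketch after the Experiment Setup entry).

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, eps, L, M_diag, rng):
    """One Metropolis-adjusted HMC transition ("sample the next state q").
    M_diag is the diagonal mass matrix that the paper's scheme adapts."""
    p0 = rng.normal(size=np.shape(q)) * np.sqrt(M_diag)   # p ~ N(0, M)
    grad_U = lambda x: -grad_log_prob(x)                  # U(q) = -log p(q)
    q_new, p_new = leapfrog(q, p0, grad_U, eps, L, M_diag)
    # The 0.5 * log|M| terms cancel in the acceptance ratio, so drop them.
    h_old = -log_prob(q) + 0.5 * np.sum(p0 ** 2 / M_diag)
    h_new = -log_prob(q_new) + 0.5 * np.sum(p_new ** 2 / M_diag)
    accept = np.log(rng.uniform()) < h_old - h_new
    return (q_new, True) if accept else (q, False)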
Open Source Code: Yes
"Our implementation builds up on tensorflow probability [39] with some target densities taken from [53]." Code: https://github.com/marcelah/entropy_adaptive_hmc
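The repository layers its adaptive sampler on top of TensorFlow Probability's MCMC primitives. As context, a stock TFP HMC baseline on a toy ill-conditioned Gaussian looks like the following; this is plain TFP usage, not the repo's entropy-based sampler, and the target and settings are illustrative.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy ill-conditioned Gaussian target (illustrative choice).
target = tfd.MultivariateNormalDiag(
    loc=tf.zeros(10), scale_diag=tf.linspace(0.1, 1.0, 10))

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.05,
    num_leapfrog_steps=10)

samples = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=500,
    current_state=tf.zeros([10, 10]),   # 10 parallel chains, as in the paper
    kernel=kernel,
    trace_fn=None,
    seed=0)
```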
Open Datasets: Yes
"We considered six datasets (Australian Credit, Heart, Pima Indian, Ripley, German Credit and Caravan) that are commonly used for benchmarking inference methods, cf. [16]."

Dataset Splits: No
The paper does not provide dataset split information (exact percentages, sample counts, or a detailed splitting methodology) for training, validation, or testing.

Hardware Specification: No
"The authors acknowledge the use of the UCL Myriad High Performance Computing Facility (Myriad@UCL), and associated support services, in the completion of this work." This statement does not name specific hardware such as GPU/CPU models, processor types, or memory amounts.

Software Dependencies: No
"Our implementation builds up on tensorflow probability [39] with some target densities taken from [53]." TensorFlow Probability is named, but no version number is given for it or for any other dependency, which reproducible software details would require.
Experiment Setup: Yes
"The adaptation scheme in Algorithm 1 requires choosing learning rates ρθ, ρβ, ργ and can be viewed within a stochastic approximation framework of controlled Markov chains... We have used Adam [37] with a constant step size to adapt the mass matrix, but have stopped the adaptation after some fixed steps... We adapt the sampler for 4 × 10^4 steps in case (i) and for 10^5 steps in case (ii). We used 10 parallel chains throughout our experiments to adapt the mass matrix."
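A schematic of the adaptation phase as quoted: Adam with a constant step size updates an unconstrained mass-matrix parameter across 10 parallel chains for a fixed number of steps, after which adaptation stops and the parameter is frozen. The loss and transition below are stand-ins so the sketch runs (the paper instead maximises an entropy-based objective and uses its HMC kernel); only the control flow mirrors the quoted setup.

```python
import tensorflow as tf

dim, num_chains = 10, 10              # 10 parallel chains, as in the paper
num_adapt_steps = 1_000               # paper: 4e4 (case i) or 1e5 (case ii)

# theta parameterises a diagonal mass matrix M = diag(exp(log_mass_diag)).
log_mass_diag = tf.Variable(tf.zeros(dim))
opt = tf.keras.optimizers.Adam(learning_rate=1e-2)   # constant step size

states = tf.random.normal([num_chains, dim])

for step in range(num_adapt_steps):
    with tf.GradientTape() as tape:
        # Stand-in objective: match M^{-1} to the across-chain variance.
        # The paper instead maximises an entropy-based criterion of the proposal.
        target_var = tf.math.reduce_variance(states, axis=0)
        loss = tf.reduce_sum((tf.exp(-log_mass_diag) - target_var) ** 2)
    grads = tape.gradient(loss, [log_mass_diag])
    opt.apply_gradients(zip(grads, [log_mass_diag]))
    # Stand-in for an HMC transition under the current mass matrix.
    states = 0.9 * states + 0.1 * tf.random.normal([num_chains, dim])
# Adaptation stops here; sampling continues with log_mass_diag frozen.
```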