Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Microcanonical Hamiltonian Monte Carlo
Authors: Jakob Robnik, G. Bruno De Luca, Eva Silverstein, Uroš Seljak
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our method on various benchmark problems in Section 3. The code with a tutorial is publicly available. Section 3 is titled "Experiments" and includes quantitative comparisons of MCHMC and MCLMC against NUTS HMC on several benchmark problems (Ill-conditioned Gaussian, Bi-modal distribution, Rosenbrock function, Neal's funnel, German credit, Stochastic Volatility, Cauchy distribution) using metrics like ESS. |
| Researcher Affiliation | Academia | Jakob Robnik: Physics Department, University of California at Berkeley; G. Bruno De Luca: Stanford Institute for Theoretical Physics, Stanford University; Eva Silverstein: Stanford Institute for Theoretical Physics, Stanford University; Uroš Seljak: Physics Department, University of California at Berkeley and Lawrence Berkeley National Laboratory. All listed affiliations are academic institutions or associated national laboratories. |
| Pseudocode | Yes | Algorithm 1: MCHMC q = 0 algorithm. Algorithm 2: MCLMC q = 0 algorithm. |
| Open Source Code | Yes | The code with a tutorial is publicly available: https://github.com/JakobRobnik/MicroCanonicalHMC |
| Open Datasets | Yes | 3.5 German credit: This is a popular Bayesian regression test case (Dua and Graff, 2017). We use the model implementation from the Inference Gym (Sountsov et al., 2020) and initialize the sampler by a draw from a standard Gaussian, centered at the MAP solution. |
| Dataset Splits | No | The paper primarily focuses on evaluating the sampling efficiency and accuracy of MCHMC and MCLMC on various target distributions (many of which are synthetic or toy problems), rather than training/testing machine learning models with explicit dataset splits. While some real-world datasets like "German credit" and "S&P500 index" are used, specific train/validation/test splits are not provided; the evaluation is focused on posterior quality or sample properties. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. It focuses on the algorithmic details and performance metrics without mentioning specific CPU, GPU, or memory specifications. |
| Software Dependencies | No | The paper mentions the use of "NUTS (...) as implemented in the NumPyro library (Phan et al., 2019)" and references "Blackjax: Library of samplers for jax (Lao and Louf, 2022)". However, specific version numbers for NumPyro, Blackjax, or JAX are not provided. |
| Experiment Setup | Yes | We develop a fast tuning algorithm for the bounce frequency (bounce strength for MCLMC) and the integration step-size. We do a short run with a few hundred steps and ε₀ = 0.5 to determine Var[E] and update the step-size to ε = ε₀(0.0005 d/Var[E])^(1/4). We repeat this step a few times for convergence. ... For the 50D ICG example, we run 500 parallel chains, each initialized from a standard Gaussian. The integration step-size is ε = 0.5. |
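The step-size tuning rule quoted above can be sketched as a short iteration. This is a minimal illustration, not the paper's implementation: the helper `estimate_energy_variance` is a hypothetical stand-in for the short MCHMC run (a few hundred steps) that measures Var[E] at the current step-size, and the toy Var[E] ∝ ε⁴ scaling used in the example is an assumption for demonstration only.

```python
def tune_stepsize(estimate_energy_variance, d, eps0=0.5, n_rounds=3):
    """Iteratively apply the update eps <- eps * (0.0005 * d / Var[E])**(1/4).

    estimate_energy_variance(eps): returns Var[E] from a short run at
    step-size eps (here a user-supplied placeholder).
    d: dimensionality of the target distribution.
    """
    eps = eps0
    for _ in range(n_rounds):
        var_E = estimate_energy_variance(eps)
        eps = eps * (0.0005 * d / var_E) ** 0.25
    return eps

# Toy example: assume Var[E] = c * eps**4 (a hypothetical leapfrog-like
# error scaling). The iteration then lands on the fixed point where
# Var[E] = 0.0005 * d, i.e. eps* = (0.0005 * d / c)**(1/4).
d = 50   # dimension of the 50D ICG example
c = 2.0  # hypothetical proportionality constant
eps_star = tune_stepsize(lambda eps: c * eps**4, d)
```

Under this toy scaling the update converges in a single round, since ε cancels: ε · (0.0005 d / (c ε⁴))^(1/4) = (0.0005 d / c)^(1/4); with a noisy empirical Var[E], a few repetitions are needed, as the paper describes.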