Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time
Authors: Jun-Kun Wang, Andre Wibisono
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now evaluate HMC with the proposed Chebyshev integration time (Algorithm 2) and HMC with the constant integration time (Algorithm 2 with line 7 replaced by the constant integration time (5)) on several tasks. Table 2 shows the results. HMC with Chebyshev integration time consistently outperforms HMC with the constant integration time on all the metrics: Mean ESS, Min ESS, Mean ESS/Sec, and Min ESS/Sec. |
| Researcher Affiliation | Academia | Jun-Kun Wang and Andre Wibisono Department of Computer Science, Yale University {jun-kun.wang,andre.wibisono}@yale.edu |
| Pseudocode | Yes | Algorithm 1: IDEAL HMC and Algorithm 2: HMC WITH CHEBYSHEV INTEGRATION TIME |
| Open Source Code | Yes | Our implementation of the experiments is done by modifying a publicly available code of HMCs by Brofos & Lederman (2021). Code for our experiments can be found in the supplementary. |
| Open Datasets | Yes | We consider three datasets: Heart, Breast Cancer, and Diabetes binary classification datasets, which are all publicly available online. |
| Dataset Splits | No | The paper describes experiments involving sampling from various probability distributions and analyzing the generated samples (e.g., '10,000 samples collected from a number of 10,000 HMC chains'). However, it does not define traditional training, validation, or test dataset splits in the context of supervised learning, as the experiments focus on evaluating sampling algorithms rather than training and evaluating a model on pre-split datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models, or memory configurations. |
| Software Dependencies | No | The paper mentions using 'Numpy' and the 'ArViz' toolkit, and modifying code by 'Brofos & Lederman (2021)', but it does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | For all the tasks in the experiments, the total number of iterations of HMCs is set to be K = 10,000, and hence we collect K = 10,000 samples along the trajectory. For the step size θ in the leapfrog steps, we let θ ∈ {0.001, 0.005, 0.01, 0.05}. |
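To make the "Chebyshev integration time" of Algorithm 2 concrete, the following is a minimal sketch, assuming a target that is m-strongly log-concave and L-smooth and that the K integration times are the inverse square roots of the Chebyshev nodes on [m, L]; the function name and signature here are our own illustration, not the paper's code.

```python
import numpy as np

def chebyshev_integration_times(m, L, K):
    """Hypothetical sketch: one integration time per HMC iteration,
    derived from the K Chebyshev nodes mapped onto [m, L]."""
    k = np.arange(1, K + 1)
    # Chebyshev nodes of order K on [-1, 1], shifted and scaled to [m, L]
    nodes = (L + m) / 2 + (L - m) / 2 * np.cos((2 * k - 1) * np.pi / (2 * K))
    # Integration time scales inversely with the square root of the node
    return 1.0 / np.sqrt(nodes)

times = chebyshev_integration_times(m=1.0, L=100.0, K=10)
```

Each returned time lies between 1/√L and 1/√m, so the schedule interleaves short and long Hamiltonian trajectories rather than using one constant time as in (5).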
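The ESS metrics reported above (Mean ESS, Min ESS) come from the ArviZ toolkit; as a self-contained illustration of what such a number measures, here is a minimal single-chain effective-sample-size estimate using a Geyer-style truncation at the first negative autocorrelation. This is only a sketch of the general idea, not the paper's or ArviZ's exact estimator.

```python
import numpy as np

def effective_sample_size(chain):
    """Rough single-chain ESS: n / (1 + 2 * sum of positive autocorrelations)."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    # Autocorrelation at lags 0..n-1, each normalized by its sample count
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    # Sum autocorrelations until the first negative value (simple truncation)
    s = 0.0
    for rho in acf[1:]:
        if rho < 0:
            break
        s += rho
    return n / (1 + 2 * s)
```

For a near-independent chain the estimate is close to the chain length, while strong positive autocorrelation (as in a poorly mixing sampler) drives it well below n; dividing ESS by wall-clock time gives the ESS/Sec figures used in Table 2.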