Magnetic Hamiltonian Monte Carlo
Authors: Nilesh Tripuraneni, Mark Rowland, Zoubin Ghahramani, Richard Turner
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here we investigate the performance of magnetic HMC against standard HMC in several examples; in each case commenting on our choice of the magnetic field term G. Step sizes (ϵ) and number of leapfrog steps (L) were tuned to achieve an acceptance rate between .7 and .8, after which the norm of the non-zero elements in G was set to .1-.2, which was found to work well. |
| Researcher Affiliation | Collaboration | (1) UC Berkeley, USA; (2) University of Cambridge, UK; (3) Uber AI Labs, USA. Correspondence to: Nilesh Tripuraneni <nileshtrip@gmail.com>. |
| Pseudocode | Yes | Algorithm 1 Magnetic HMC (MHMC) |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is openly available. |
| Open Datasets | No | The paper uses synthetic distributions ('Multiscale Gaussians', 'Mixture of Gaussians') and generates observations from a model ('FitzHugh-Nagumo model'), rather than using external, publicly available datasets with concrete access information. |
| Dataset Splits | No | The paper describes generating samples for evaluation and running parallel chains, but does not specify train/validation/test dataset splits as it is not a supervised learning task using pre-existing datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU/GPU models or memory specifications. |
| Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers required for reproducibility. |
| Experiment Setup | Yes | Step sizes (ϵ) and number of leapfrog steps (L) were tuned to achieve an acceptance rate between .7 and .8, after which the norm of the non-zero elements in G was set to .1-.2, which was found to work well. ... We tuned HMC to achieve an acceptance rate of .75 and used the same ϵ, L for MHMC... with settings of ϵ = 0.015, L = 10, which resulted in an average acceptance rate of .8. (A hedged sketch of one MHMC step with these settings follows the table.) |
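
To make the experiment-setup row concrete, below is a minimal sketch of a single MHMC transition in the spirit of Algorithm 1, using the ϵ = 0.015, L = 10 setting quoted above. Since no source code is released (per the table), the function names, the isotropic Gaussian momentum, and the use of `scipy.linalg.expm` for the exact "magnetic" flow are our assumptions, not the authors' implementation.

```python
# Hedged sketch of one Magnetic HMC (MHMC) transition, reconstructed from the
# paper's description; this is NOT the authors' released code.
import numpy as np
from scipy.linalg import expm


def magnetic_flow(eps, G):
    """Exact flow of the linear subsystem x' = p, p' = G p over time eps.

    Returns (E, Psi) with E = exp(eps*G) and Psi = integral_0^eps exp(s*G) ds,
    computed jointly via the block-matrix (Van Loan) trick so a singular G is
    handled without forming G^{-1}.
    """
    d = G.shape[0]
    block = np.zeros((2 * d, 2 * d))
    block[:d, :d] = G                # G must be antisymmetric: G.T == -G
    block[:d, d:] = np.eye(d)
    big = expm(eps * block)
    return big[:d, :d], big[:d, d:]


def mhmc_step(x, U, grad_U, G, eps=0.015, L=10, rng=np.random.default_rng()):
    """One MHMC step: leapfrog-like integration, MH correction, sign flip of G."""
    d = x.shape[0]
    p = rng.standard_normal(d)                    # resample momentum ~ N(0, I)
    E, Psi = magnetic_flow(eps, G)

    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):
        p_new = p_new - 0.5 * eps * grad_U(x_new)   # half step on the potential
        x_new = x_new + Psi @ p_new                 # exact flow of the magnetic part
        p_new = E @ p_new
        p_new = p_new - 0.5 * eps * grad_U(x_new)   # second half step

    # Metropolis-Hastings correction with H(x, p) = U(x) + 0.5 |p|^2.
    H_old = U(x) + 0.5 * (p @ p)
    H_new = U(x_new) + 0.5 * (p_new @ p_new)
    if np.log(rng.uniform()) < H_old - H_new:
        x = x_new
    # Negating G each iteration (together with the usual momentum flip, which is
    # irrelevant here because momentum is resampled) preserves reversibility, as
    # described in Algorithm 1 of the paper.
    return x, -G
```

At a call site one would feed the returned `-G` back into the next `mhmc_step` call; per the quoted setup, the norm of the non-zero entries of G would be set to roughly .1-.2 after tuning ϵ and L for an acceptance rate of about .7-.8.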