Model evidence from nonequilibrium simulations
Author: Michael Habeck
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We studied the performance of the evidence estimators on forward/backward simulations of the Gaussian toy model. Our first test system is a 32 × 32 Ising model for which the log evidence can be computed exactly... We studied the performance of the nonequilibrium marginal likelihood estimators on various challenging probabilistic models including Markov random fields and Gaussian mixture models. |
| Researcher Affiliation | Academia | Michael Habeck Statistical Inverse Problems in Biophysics, Max Planck Institute for Biophysical Chemistry & Institute for Mathematical Stochastics, University of Göttingen, 37077 Göttingen, Germany email mhabeck@gwdg.de |
| Pseudocode | Yes | Algorithm 1 Bennett’s acceptance ratio (BAR) |
| Open Source Code | Yes | A python package implementing the work simulations and evidence estimators can be downloaded from https://github.com/michaelhabeck/paths. |
| Open Datasets | Yes | We ran tests on an RBM with 784 visible and 500 hidden units trained on the MNIST handwritten digits dataset [24]... [24] Y. Le Cun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. |
| Dataset Splits | No | The paper mentions a 'training set' for the RBM, but it does not specify any dataset splits for training, validation, or testing, nor does it refer to predefined standard splits with details. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions a 'python package' for implementation but does not specify version numbers for Python itself or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | We generated M = 1000 forward and reverse paths using a linear inverse temperature schedule that interpolates between β0 = 0 and βK = 1 where K = 1000... The single spin-flip transitions are repeated N times at constant βk, i.e. N is the number of equilibration steps... trained on the MNIST handwritten digits dataset [24] with contrastive divergence using 25 steps [25]. |
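The pseudocode row above references Algorithm 1, Bennett's acceptance ratio (BAR). As a minimal illustration (not the paper's `paths` package), the sketch below estimates the free-energy difference ΔF (in units of kT) from forward and reverse work samples by solving Bennett's implicit equation; since the residual is monotone in ΔF, a bracketed bisection suffices:

```python
import numpy as np

def fermi(x):
    # numerically stable Fermi function 1 / (1 + exp(x))
    return np.exp(-np.logaddexp(0.0, x))

def bar(w_forward, w_reverse, bisections=200):
    """BAR estimate of Delta F from forward and reverse work samples.

    Solves sum_i fermi(M + w_f_i - dF) = sum_j fermi(-M + w_r_j + dF),
    where M = log(n_forward / n_reverse).
    """
    w_f = np.asarray(w_forward, dtype=float)
    w_r = np.asarray(w_reverse, dtype=float)
    M = np.log(len(w_f) / len(w_r))

    # monotonically increasing in dF, so its root is unique
    def imbalance(dF):
        return fermi(M + w_f - dF).sum() - fermi(-M + w_r + dF).sum()

    # expand the bracket until it contains the root, then bisect
    lo, hi, span = -1.0, 1.0, 1.0
    while imbalance(lo) > 0.0:
        lo -= span
        span *= 2.0
    span = 1.0
    while imbalance(hi) < 0.0:
        hi += span
        span *= 2.0
    for _ in range(bisections):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For Gaussian work distributions satisfying the Crooks fluctuation theorem (forward mean ΔF + σ²/2, reverse mean −ΔF + σ²/2), the estimate recovers the true ΔF.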
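The experiment-setup row (M = 1000 paths, a linear inverse-temperature schedule from β₀ = 0 to β_K = 1 with K = 1000) can be mimicked on a hypothetical 1-D Gaussian toy model, a stand-in for the paper's test systems. In this sketch, exact equilibrium draws replace the N equilibration (spin-flip/MCMC) steps per temperature, and accumulating work along forward paths yields a Jarzynski/AIS estimate of the log evidence:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model: pi_beta(x) ∝ N(x; 0, 1) * exp(-beta * x**2 / 2),
# so the exact evidence at beta = 1 is Z = 1/sqrt(2), log Z = -0.5*log(2)
def energy(x):
    return 0.5 * x**2

K, M = 1000, 1000                      # schedule length and number of paths
betas = np.linspace(0.0, 1.0, K + 1)   # linear inverse-temperature schedule

log_weights = np.zeros(M)
for k in range(K):
    # exact draw from pi_{beta_k} = N(0, 1/(1 + beta_k)); the paper instead
    # equilibrates with N MCMC transitions at each temperature
    x = rng.normal(0.0, 1.0 / np.sqrt(1.0 + betas[k]), size=M)
    # accumulate minus the work increment (beta_{k+1} - beta_k) * E(x)
    log_weights -= (betas[k + 1] - betas[k]) * energy(x)

# Jarzynski / annealed-importance-sampling estimate of the log evidence
log_Z = np.logaddexp.reduce(log_weights) - np.log(M)
```

With this fine schedule the work is nearly deterministic, so the estimate lands very close to the exact value −0.5 log 2 ≈ −0.347.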