Closed-Form Gibbs Sampling for Graphical Models with Algebraic Constraints

Authors: Hadi Mohasel Afshar, Scott Sanner, Christfried Webers

AAAI 2016

The table below lists each reproducibility variable, the result, and the supporting LLM response:
Research Type: Experimental. "Experiments demonstrate the proposed sampler converges at least an order of magnitude faster than existing Monte Carlo samplers." "In this section, we are interested in (a) comparing the efficiency and accuracy of our proposed closed-form Gibbs sampler against other MCMC methods on models with observed constraints, as well as (b) studying the performance of the proposed collapsing mechanism (dimension reduction) vs. the practice of relaxing such constraints with noise (as often suggested in probabilistic programming toolkits)." (A minimal sketch contrasting these two constraint treatments appears after this table.)
Researcher Affiliation: Academia. Hadi Mohasel Afshar, Research School of Computer Science, Australian National University, Canberra, ACT 0200, Australia, hadi.afshar@anu.edu.au; Scott Sanner, School of EE & Computer Science, Oregon State University, Corvallis, OR 97331, USA, scott.sanner@oregonstate.edu; Christfried Webers, National ICT Australia (NICTA), Canberra, ACT 2601, Australia, christfried.webers@nicta.com.au.
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: No. The paper does not provide concrete access to source code for the described methodology.
Open Datasets: No. The paper defines parameters for its experimental models (e.g., uniform priors for masses and velocities) rather than using external, publicly available datasets.
Dataset Splits: No. The paper does not specify training, validation, or test splits; it focuses on sampling from defined probabilistic models.
Hardware Specification: Yes. "All algorithms run on a 4 core, 3.40GHz PC."
Software Dependencies: Yes. The paper uses the Stan probabilistic programming language (Stan Development Team 2014); its reference list gives the exact version: "Stan Development Team. 2014. Stan Modeling Language Users Guide and Reference Manual, Version 2.5.0."
Experiment Setup: Yes. "To soften the determinism, the observation of a deterministic variable Z is approximated by observation of a newly introduced variable with a Gaussian prior centered at Z and with noise variance (parameter) σ²_Z. [...] The used parameters are summarized in Table 1." "MH is automatically tuned following Roberts et al. (1997) by testing 200 equidistant proposal variances in the interval (0, 0.1] and accepting the variance for which the acceptance rate is closest to 0.24." (A sketch of this tuning loop appears after this table.)
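
To make the comparison described under Research Type concrete, the following is a minimal Python sketch of the two treatments of an observed deterministic constraint that the paper contrasts: relaxing the constraint with Gaussian observation noise versus collapsing it out (dimension reduction). The toy model, the names z_obs and sigma_z, and all numeric values are illustrative assumptions, not taken from the paper's Table 1.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: X, Y ~ Uniform(0, 10) with a deterministic Z = X + Y observed
# at z_obs. Both z_obs and sigma_z are illustrative, not the paper's values.
z_obs, sigma_z = 8.0, 0.1

def log_relaxed_posterior(x, y):
    # Relaxation: replace the hard constraint by a Gaussian observation
    # N(z_obs | x + y, sigma_z**2), as probabilistic programming toolkits
    # often suggest. The state stays two-dimensional and must still be
    # explored by MCMC; the density grows sharply peaked as sigma_z shrinks.
    if not (0.0 <= x <= 10.0 and 0.0 <= y <= 10.0):
        return -np.inf
    return -0.5 * ((x + y - z_obs) / sigma_z) ** 2

def collapsed_samples(n):
    # Collapsing: solve the constraint for y = z_obs - x and sample x alone,
    # restricted to where both uniform priors have support. The collapsed
    # model is one dimension smaller and keeps the constraint exact; for
    # this toy model it is itself uniform, so it can be sampled directly.
    lo, hi = max(0.0, z_obs - 10.0), min(10.0, z_obs)
    return rng.uniform(lo, hi, size=n)

print(collapsed_samples(3))  # three exact draws of X given X + Y = z_obs

The relaxed density still has to be explored by MCMC and its peakedness depends on the chosen noise variance, which is the kind of trade-off the paper's experiments quantify.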
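The MH tuning rule quoted under Experiment Setup can be sketched the same way. This assumes a generic random-walk Metropolis-Hastings kernel with a short pilot run per candidate variance; the chain length, starting point, and example target below are illustrative choices, not details given in the excerpt.

import numpy as np

rng = np.random.default_rng(1)

def mh_acceptance_rate(log_target, var, x0=0.5, n_steps=500):
    # Short pilot run of random-walk Metropolis-Hastings; returns the
    # empirical fraction of accepted proposals.
    x, lp, accepted = x0, log_target(x0), 0
    sd = np.sqrt(var)
    for _ in range(n_steps):
        x_prop = x + rng.normal(0.0, sd)
        lp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH accept/reject step
            x, lp, accepted = x_prop, lp_prop, accepted + 1
    return accepted / n_steps

def tune_proposal_variance(log_target, n_grid=200, upper=0.1, target=0.24):
    # Test n_grid equidistant variances in (0, upper] and keep the one whose
    # acceptance rate is closest to the 0.24 target of Roberts et al. (1997).
    grid = np.linspace(upper / n_grid, upper, n_grid)
    rates = np.array([mh_acceptance_rate(log_target, v) for v in grid])
    return grid[np.argmin(np.abs(rates - target))]

def log_target(x):
    # Illustrative target: a standard normal truncated to [0, 1].
    return -0.5 * x * x if 0.0 <= x <= 1.0 else -np.inf

print(f"tuned proposal variance: {tune_proposal_variance(log_target):.4f}")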