Linear-Time Gibbs Sampling in Piecewise Graphical Models

Authors: Hadi Afshar, Scott Sanner, Ehsan Abbasnejad

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we show that the proposed method, augmented Gibbs sampling, mixes faster than rejection sampling, baseline Gibbs and MH. Algorithms are tested against the BPPL model of Example 1 and a Market Maker (MM) model motivated by (Das and Magdon-Ismail 2008). For each combination of the parameter space dimensionality D and the number of observed data points n, we generate data points from each model and estimate the expected value of the ground-truth posterior distribution by running rejection sampling on a 4-core, 3.40GHz PC for 15 to 30 minutes. Subsequently, using each algorithm, particles are generated and, based on them, the average absolute error between the samples and the ground truth, ||E[θ] − θ||₁, is computed. The time until the absolute error reaches the threshold error of 3.0 is recorded. For each algorithm, three independent Markov chains are executed and the results are averaged. The whole process is repeated 15 times; the results are averaged and standard errors are computed. We observe that in both models, the behavior of each algorithm follows a particular pattern (Figure 5). (A minimal sketch of this time-to-threshold measurement follows the table.)
Researcher Affiliation | Collaboration | Hadi Mohasel Afshar, ANU & NICTA, Canberra, Australia, hadi.afshar@anu.edu.au; Scott Sanner, NICTA & ANU, Canberra, Australia, ssanner@nicta.com.au; Ehsan Abbasnejad, ANU & NICTA, Canberra, Australia, ehsan.abbasnejad@anu.edu.au
Pseudocode | No | The paper describes algorithms and methods in prose but does not include any formal pseudocode blocks or clearly labeled algorithm figures.
Open Source Code | No | The paper does not provide any statement about releasing open-source code or a link to a code repository.
Open Datasets | No | The paper states: "For each combination of the parameter space dimensionality D and the number of observed data n, we generate data points from each model and simulate the associated expected value of ground truth posterior distribution..." It does not use or provide access to a pre-existing public dataset.
Dataset Splits | No | The paper describes generating data and evaluating the sampling algorithms based on error and mixing time, but it does not specify traditional train/validation/test dataset splits. The problem is framed as Bayesian inference, where the focus is on sampling from a posterior distribution rather than supervised learning with predefined data partitions.
Hardware Specification | Yes | "For each combination of the parameter space dimensionality D and the number of observed data n, we generate data points from each model and simulate the associated expected value of ground truth posterior distribution by running rejection sampling on a 4 core, 3.40GHz PC for 15 to 30 minutes."
Software Dependencies | No | The paper mentions tuning MH, but it does not list any specific software or library names with version numbers used for implementation or experimentation.
Experiment Setup | Yes | Models are configured as follows: in BPPL, η = 0.4 and the prior is uniform in a hypercube; in MM, L = 0, H = 20, ϵ = 2.5 and δ = 10. For each algorithm, three independent Markov chains are executed and the results are averaged. The whole process is repeated 15 times; the results are averaged and standard errors are computed. We carefully tuned MH to reach the optimal acceptance rate of 0.234 (Roberts, Gelman, and Gilks 1997). (A configuration sketch follows the table.)
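
As a reading aid for the Research Type row above, the following is a minimal sketch of the time-to-threshold measurement it describes. It assumes a hypothetical sampler.draw() interface and a precomputed ground-truth posterior mean theta_truth; these names, and the use of a running mean over the chain, are illustrative assumptions rather than details taken from the paper.

    import time
    import numpy as np

    def time_to_threshold(sampler, theta_truth, threshold=3.0, max_samples=200_000):
        """Run one chain until the L1 error of the running posterior-mean estimate,
        ||E[theta] - theta_truth||_1, first reaches `threshold`; return elapsed seconds."""
        start = time.time()
        running_sum = np.zeros_like(theta_truth, dtype=float)
        for t in range(1, max_samples + 1):
            running_sum += sampler.draw()                         # one particle from the chain
            error = np.abs(running_sum / t - theta_truth).sum()   # L1 distance to ground truth
            if error <= threshold:
                return time.time() - start
        return None

    def evaluate(make_sampler, theta_truth, n_chains=3, n_repeats=15):
        """Average the time-to-threshold over 3 independent chains, repeat 15 times,
        and report the mean and standard error, mirroring the protocol above."""
        per_repeat = []
        for _ in range(n_repeats):
            times = [time_to_threshold(make_sampler(), theta_truth) for _ in range(n_chains)]
            per_repeat.append(np.mean([t for t in times if t is not None]))
        per_repeat = np.asarray(per_repeat)
        return per_repeat.mean(), per_repeat.std(ddof=1) / np.sqrt(n_repeats)

Under these assumptions, the reported curves would correspond to the mean and standard error returned by evaluate for each sampler and each (D, n) setting.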
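
For the Experiment Setup row, the reported parameter values and the Metropolis-Hastings acceptance-rate target can be collected into a small configuration sketch. The key names and the step-size adaptation rule below are assumptions for illustration, not the authors' implementation.

    import math

    # Parameter values reported in the paper; key names are illustrative.
    CONFIG = {
        "BPPL": {"eta": 0.4, "prior": "uniform over a hypercube"},
        "MM": {"L": 0, "H": 20, "epsilon": 2.5, "delta": 10},
        "n_chains": 3,
        "n_repeats": 15,
        "mh_target_acceptance": 0.234,  # Roberts, Gelman, and Gilks (1997)
    }

    def tune_mh_step_size(step, observed_acceptance, target=0.234, rate=0.1):
        """One possible (assumed) tuning rule: grow the proposal step when acceptance
        exceeds the 0.234 target and shrink it when acceptance falls below."""
        return step * math.exp(rate * (observed_acceptance - target))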