Improving Gibbs Sampler Scan Quality with DoGS

Authors: Ioannis Mitliagkas, Lester Mackey

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments with joint image segmentation and object recognition, Markov chain Monte Carlo maximum likelihood estimation, and Ising model inference, DoGS consistently deliver higher-quality inferences with significantly smaller sampling budgets than standard Gibbs samplers.
Researcher Affiliation | Collaboration | (1) Department of Computer Science, Stanford University, Stanford, CA 94305, USA; (2) Microsoft Research New England, One Memorial Drive, Cambridge, MA 02142, USA.
Pseudocode | Yes | Algorithm 1: Gibbs sampling (Geman & Geman, 1984) and Algorithm 2: DoGS, scan selection via coordinate descent (a baseline Gibbs scan is sketched below, after the table).
Open Source Code | No | The paper does not provide any explicit links or statements about open-sourcing the code.
Open Datasets | Yes | Using the Microsoft Research Cambridge (MSRC) pixel-wise labeled image database v2
Dataset Splits | No | The paper mentions '90% training / 10% test partitions' but does not specify a separate validation split.
Hardware Specification | Yes | Each Gibbs step took 12.65 µs on a 2015 MacBook Pro.
Software Dependencies | No | The paper does not specify any software names with version numbers.
Experiment Setup | Yes | For all experiments with binary MRFs, we adopt the model parameterization of (3) (with no additional temperature parameter) and use Theorem 11 to produce the Dobrushin influence bound C. ... We target a single marginal X1 with d = e1 and take a systematic scan of length T = 2 × 10^6 as our input scan. ... We set the number of gradient steps, MC steps per gradient, and independent runs of Gibbs sampling to the suggested values in (Domke, 2015).
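For orientation, the sketch below illustrates the baseline the Pseudocode and Experiment Setup rows refer to: a systematic-scan Gibbs sampler (in the spirit of Algorithm 1) on a small binary pairwise MRF, with the single targeted marginal X1 (d = e1) estimated from the chain and checked against a brute-force computation. This is a minimal illustrative sketch, not the authors' implementation and not DoGS itself; the model size, parameter values, helper names, and the much shorter scan length than the paper's T = 2 × 10^6 are all assumptions made here for readability.

```python
# Minimal sketch (not the paper's code): systematic-scan Gibbs sampling on a
# binary pairwise MRF with states in {-1,+1}, p(x) ∝ exp(0.5*x'*theta*x + bias'*x),
# estimating the single marginal targeted by d = e1, i.e. P(X1 = +1).
import itertools
import numpy as np

rng = np.random.default_rng(0)

def conditional_prob_plus(x, i, theta, bias):
    """P(X_i = +1 | rest) for the pairwise model above (theta symmetric, zero diagonal)."""
    field = bias[i] + theta[i] @ x - theta[i, i] * x[i]
    return 1.0 / (1.0 + np.exp(-2.0 * field))

def systematic_scan_gibbs(theta, bias, T, x0=None):
    """Run a length-T systematic scan: coordinates visited in round-robin order."""
    n = len(bias)
    x = np.ones(n) if x0 is None else x0.copy()
    samples = np.empty((T, n))
    for t in range(T):
        i = t % n                                   # systematic scan: 0, 1, ..., n-1, 0, ...
        p_plus = conditional_prob_plus(x, i, theta, bias)
        x[i] = 1.0 if rng.random() < p_plus else -1.0
        samples[t] = x
    return samples

# Small random Ising-type model (illustrative; the paper's models are much larger).
n_vars = 8
theta = 0.2 * rng.standard_normal((n_vars, n_vars))
theta = np.triu(theta, 1)
theta = theta + theta.T                             # symmetric couplings, zero diagonal
bias = 0.1 * rng.standard_normal(n_vars)

# Targeting the single marginal X1 corresponds to the weight vector d = e1.
samples = systematic_scan_gibbs(theta, bias, T=200_000)
est = np.mean(samples[len(samples) // 2:, 0] == 1.0)   # discard the first half as burn-in

# Brute-force marginal for this toy model, as a sanity check (2^8 configurations).
configs = np.array(list(itertools.product([-1.0, 1.0], repeat=n_vars)))
log_w = 0.5 * np.einsum('ki,ij,kj->k', configs, theta, configs) + configs @ bias
w = np.exp(log_w - log_w.max())
exact = w[configs[:, 0] == 1.0].sum() / w.sum()
print(f"Gibbs estimate of P(X1=+1): {est:.3f}   exact: {exact:.3f}")
```

DoGS differs from this baseline only in how the scan (the sequence of coordinate indices `i`) is chosen: rather than the round-robin order above, it selects the sequence by coordinate descent on a Dobrushin-influence-based error bound for the targeted marginal, which is what Algorithm 2 in the paper describes.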