Synthesis of MCMC and Belief Propagation

Authors: Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we report experimental results for computing the partition function of the Ising model and the hard-core model. We compare Algorithm 2 in Section 3 (coined MCMC-BP-2reg) and Algorithm 3 in Section 4.2 (coined MCMC-BP-whole) with the bare Bethe approximation (coined BP) and the popular Gibbs sampler (coined MCMC-Gibbs). To make the comparison fair, we use the same annealing scheme for all MCMC schemes, thus making their running times comparable. More specifically, we generate each sample after running T1 = 1,000 iterations of an MC and take s1 = 100 samples to compute each estimation (e.g., H_i) at intermediate steps. For the performance measure, we use the log-partition function approximation error |log Z - log Z_approx| / |log Z|, where Z_approx is the output of the respective algorithm. We conducted 3 experiments on the 4 × 4 grid graph. As seen clearly in Figure 1, BP and MCMC-Gibbs are outperformed by MCMC-BP-2reg or MCMC-BP-whole at most tested regimes in the first experiment with no external field, where in this case the 2-regular loop series (LS) is equal to the full one." (A sketch of this error metric appears after the table.)
Researcher Affiliation | Academia | School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea; Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA; Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
Pseudocode | Yes | Algorithm 1: Sampling 2-regular loops; Algorithm 2: Approximation for Z_2-Loop; Algorithm 3: Approximation for Z_Loop
Open Source Code | No | The paper does not provide an explicit statement about the availability of its source code, nor does it include a link to a repository.
Open Datasets | No | The paper describes generating instances of the Ising model and the hard-core model on a 4 × 4 grid graph for its experiments, but does not provide access information (link, DOI, or specific citation) for these generated instances or a public source for the specific data used.
Dataset Splits | No | The paper discusses the number of samples and iterations for its MCMC algorithms but does not provide training, validation, or test dataset splits.
Hardware Specification | No | The paper describes experiments on a 4 × 4 grid graph but does not specify any hardware details such as the CPU, GPU, or memory used.
Software Dependencies | No | The paper describes algorithms and methods but does not list any specific software dependencies or version numbers (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | "To make the comparison fair, we use the same annealing scheme for all MCMC schemes, thus making their running times comparable. More specifically, we generate each sample after running T1 = 1,000 iterations of an MC and take s1 = 100 samples to compute each estimation (e.g., H_i) at intermediate steps." (A sketch of this sampling schedule follows the table.)
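
For concreteness, here is a minimal sketch (not from the paper) of the reported error metric |log Z - log Z_approx| / |log Z|, together with a brute-force reference log Z that is feasible on a 4 × 4 grid (2^16 states). The function names, the edge-dict representation of J, and the convention p(s) ∝ exp(Σ J_ij s_i s_j + Σ h_i s_i) are assumptions for illustration, not the paper's code.

```python
import itertools

import numpy as np


def exact_log_z_ising(J, h):
    """Brute-force log partition function of a small Ising model.

    Assumed convention (illustrative, not necessarily the paper's):
    p(s) ∝ exp(sum_{(i,j)} J[(i, j)] * s_i * s_j + sum_i h[i] * s_i), s_i in {-1, +1}.
    J: dict mapping edge tuples (i, j) -> coupling; h: length-n NumPy array of fields.
    Enumerating 2^n states is feasible only for small n (n = 16 on a 4x4 grid).
    """
    n = len(h)
    log_weights = []
    for spins in itertools.product((-1.0, 1.0), repeat=n):
        s = np.array(spins)
        log_weights.append(sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) + h @ s)
    return float(np.logaddexp.reduce(log_weights))


def relative_log_z_error(log_z, log_z_approx):
    """Error measure used in the experiments: |log Z - log Z_approx| / |log Z|."""
    return abs(log_z - log_z_approx) / abs(log_z)
```

Enumeration supplies the exact reference value the error needs on the 4 × 4 grid; for larger graphs one would presumably substitute a junction-tree or transfer-matrix computation.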
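The sampling schedule itself can be sketched as below. This is a plain single-temperature Gibbs sampler with the stated T1/s1 schedule, a simplification of the paper's annealing scheme (which moves through a sequence of intermediate distributions); `gibbs_sweep`, `estimate_mean`, `nbrs`, and the edge-dict `J` are hypothetical names introduced here for illustration.

```python
import numpy as np


def gibbs_sweep(s, nbrs, J, h, rng):
    """One systematic-scan Gibbs sweep for an Ising model.

    Under p(s) ∝ exp(sum J_ij s_i s_j + sum h_i s_i), the conditional is
    p(s_i = +1 | rest) = sigmoid(2 * (h_i + sum_{j in N(i)} J_ij * s_j)).
    nbrs: adjacency list; J: dict keyed by sorted edge tuples (i, j) with i < j.
    """
    for i in range(len(s)):
        field = h[i] + sum(J[tuple(sorted((i, j)))] * s[j] for j in nbrs[i])
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
    return s


def estimate_mean(statistic, s, nbrs, J, h, rng, T1=1000, s1=100):
    """Average a statistic over s1 samples, running T1 sweeps between samples,
    mirroring the reported schedule (T1 = 1,000 iterations, s1 = 100 samples)."""
    values = []
    for _ in range(s1):
        for _ in range(T1):
            s = gibbs_sweep(s, nbrs, J, h, rng)
        values.append(statistic(s))
    return float(np.mean(values))
```

For example, `rng = np.random.default_rng(0)`, an initial state `s = [1] * 16`, and a 4 × 4 grid adjacency list would match the reported problem size.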