Bucket Renormalization for Approximate Inference

Authors: Sungsoo Ahn, Michael Chertkov, Adrian Weller, Jinwoo Shin

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on synthetic (Ising models on complete and grid graphs) and real-world models from the UAI dataset. In our experiments, both MBR and GBR show performance superior to other state-of-the-art elimination and variational algorithms.
Researcher Affiliation | Collaboration | (1) School of Electrical Engineering, KAIST, Daejeon, South Korea; (2) Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA; (3) Skolkovo Institute of Science and Technology, 143026 Moscow, Russia; (4) University of Cambridge, UK; (5) The Alan Turing Institute, UK; (6) AITrics, Seoul, South Korea.
Pseudocode | Yes | Algorithm 1: Bucket Elimination (BE); Algorithm 2: Mini-Bucket Renormalization (MBR); Algorithm 3: GM Renormalization; Algorithm 4: Global-Bucket Renormalization (GBR). (A minimal bucket elimination sketch is given after this table.)
Open Source Code | No | The paper does not provide an unambiguous statement of code release or a direct link to the source code for the described methodology.
Open Datasets | Yes | We perform extensive experiments on synthetic (Ising models on complete and grid graphs) and real-world models from the UAI dataset. ... UAI 2014 Inference Competition (Gogate, 2014). ... Gogate, Vibhav. UAI 2014 Inference Competition. http://www.hlt.utdallas.edu/~vgogate/uai14-competition/index.html, 2014.
Dataset Splits | No | The paper mentions using Ising models and UAI datasets, but it does not specify how these datasets were split into training, validation, or test sets, nor does it reference predefined splits with citations.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or cloud computing specifications.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python version, library versions).
Experiment Setup | Yes | In our experiments, we draw φij and φi uniformly from intervals of [-Δ, Δ] and [-0.1, 0.1] respectively, where Δ is a parameter controlling the interaction strength between variables. ... We vary the interaction strength Δ and the induced width bound ibound (for mini-bucket algorithms and GBP), where ibound = 10 and Δ = 1.0 are the default choices. ... For all mini-bucket algorithms, we unified the choice of elimination order for each instance of GM by applying min-fill heuristics.
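The Pseudocode row above lists Algorithm 1 (Bucket Elimination), the exact procedure that the paper's renormalization methods approximate. For reference, here is a minimal sketch of bucket elimination for the partition function Z; the factor representation (scope tuple plus a dict of assignment -> value), function names, and brute-force table enumeration are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of bucket elimination (Algorithm 1 in the paper) for the
# partition function Z of a discrete graphical model. The factor encoding and
# brute-force enumeration below are illustrative, not the authors' code.
import itertools


def multiply_and_marginalize(bucket, var, domains):
    """Multiply all factors in a bucket and sum out `var`."""
    scope = sorted(set().union(*(set(s) for s, _ in bucket)) - {var})
    table = {}
    for assignment in itertools.product(*(range(domains[v]) for v in scope)):
        ctx = dict(zip(scope, assignment))
        total = 0.0
        for value in range(domains[var]):
            ctx[var] = value
            prod = 1.0
            for s, f in bucket:
                prod *= f[tuple(ctx[v] for v in s)]
            total += prod
        table[assignment] = total
    return scope, table


def bucket_elimination(factors, order, domains):
    """factors: list of (scope, table); order: variable elimination order."""
    pool = list(factors)
    for var in order:
        bucket = [f for f in pool if var in f[0]]
        pool = [f for f in pool if var not in f[0]]
        if bucket:
            pool.append(multiply_and_marginalize(bucket, var, domains))
    # All remaining factors have empty scope; their product is Z.
    z = 1.0
    for _, table in pool:
        z *= table[()]
    return z


# Example: two binary variables joined by one pairwise factor; Z = 3.0.
phi = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 1.0}
print(bucket_elimination([(("x1", "x2"), phi)], ["x1", "x2"], {"x1": 2, "x2": 2}))
```

Mini-bucket variants such as MBR bound the induced width of this procedure (the quoted ibound = 10) by splitting oversized buckets, which is where the paper's renormalization step enters.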
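The Experiment Setup row quotes how the synthetic Ising instances are generated: pairwise potentials drawn uniformly from [-Δ, Δ] and unary potentials from [-0.1, 0.1] on complete or grid graphs. The sketch below shows one way such instances could be sampled; the function name, graph-construction helpers, and seeding are assumptions for illustration, not the authors' code.

```python
# Sketch of the quoted synthetic setup: Ising models on a complete or grid
# graph, with pairwise log-potentials phi_ij ~ Uniform[-delta, delta] and
# unary log-potentials phi_i ~ Uniform[-0.1, 0.1]. Helper names are assumed.
import itertools
import numpy as np


def sample_ising(n, delta, topology="complete", seed=0):
    rng = np.random.default_rng(seed)
    if topology == "complete":
        edges = list(itertools.combinations(range(n), 2))
        num_vars = n
    elif topology == "grid":  # here n is interpreted as the grid side length
        idx = {(i, j): i * n + j for i in range(n) for j in range(n)}
        edges = [(idx[(i, j)], idx[(i + 1, j)]) for i in range(n - 1) for j in range(n)]
        edges += [(idx[(i, j)], idx[(i, j + 1)]) for i in range(n) for j in range(n - 1)]
        num_vars = n * n
    else:
        raise ValueError(topology)
    phi_i = rng.uniform(-0.1, 0.1, size=num_vars)            # unary potentials
    phi_ij = {e: rng.uniform(-delta, delta) for e in edges}   # pairwise potentials
    return phi_i, phi_ij


# Default interaction strength quoted above: delta = 1.0.
unary, pairwise = sample_ising(n=5, delta=1.0, topology="grid")
```

Varying delta over a range while holding ibound = 10 fixed (and vice versa) reproduces the sweep described in the quoted setup.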