From Exact to Anytime Solutions for Marginal MAP

Authors: Junkyu Lee, Radu Marinescu, Rina Dechter, Alexander Ihler

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate empirically the newly introduced anytime AND/OR search algorithms and compare them with the exact algorithms on problem instances generated from the PASCAL2 Inference Challenge benchmarks (Elidan, Globerson, and Heinemann 2012). Through extensive empirical evaluations, we show that the proposed anytime algorithms dominate exact search algorithms and provide anytime solutions to the problems that were beyond the reach of exact algorithms.
Researcher Affiliation | Collaboration | Junkyu Lee, University of California, Irvine, Irvine, CA 92697, USA, junkyul@ics.uci.edu; Radu Marinescu, IBM Research Ireland, radu.marinescu@ie.ibm.com; Rina Dechter, University of California, Irvine, Irvine, CA 92697, USA, dechter@ics.uci.edu; Alexander Ihler, University of California, Irvine, Irvine, CA 92697, USA, ihler@ics.uci.edu
Pseudocode | Yes | Algorithm 1: AOBF-MMAP; Algorithm 2: RBFAOO-MMAP; Algorithm 3: WAOBF-MMAP; Algorithm 4: WRAOBF-MMAP; Algorithm 5: BRAOBB-MMAP
Open Source Code | No | The paper does not provide any concrete access to source code (e.g., repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets | Yes | All competing algorithms use the static weighted mini-bucket heuristic with moment matching, WMB-MM(i), whose strength can be controlled by a parameter i-bound (Dechter and Rish 2003; Liu and Ihler 2011). Our benchmark is a subset of the PASCAL2 Inference Challenge benchmarks (Elidan, Globerson, and Heinemann 2012) on 3 domains: grid with 15 networks; pedigree and promedas with 10 networks.
Dataset Splits | No | The paper describes how problem instances were generated ('For each network, we generated 5 MMAP problem instances where half of the variables are MAP variables: 1 easy instance... and 4 hard instances...') but does not specify explicit training, validation, or test dataset splits.
Hardware Specification | No | The paper states 'time bounds up to 2 hours with 24 GB memory' and 'For the 5 best-first algorithms, the size of the cache table was 4 GB, whereas for the 2 depth-first branch and bound algorithms the size of the cache table was only limited by the total memory limit of 24 GB', which refer to time and memory limits. However, it does not provide specific details on the CPU, GPU, or other hardware used to run the experiments.
Software Dependencies | No | The paper mentions 'WMB-MM(i)' as a heuristic used, but it does not provide specific software names with version numbers for any libraries, frameworks, or environments used in the experiments.
Experiment Setup | Yes | All competing algorithms use the static weighted mini-bucket heuristic with moment matching, WMB-MM(i), whose strength can be controlled by a parameter i-bound (Dechter and Rish 2003; Liu and Ihler 2011). The starting weight was 64 for WAOBF, WRAOBF, and WRBFAOO, and the overestimation parameter was 1 for RBFAOO and WRBFAOO. For the 5 best-first algorithms, the size of the cache table was 4 GB, whereas for the 2 depth-first branch and bound algorithms the size of the cache table was only limited by the total memory limit of 24 GB for a fair comparison. (Hedged sketches of this configuration and of a generic weighted anytime outer loop are given below the table.)
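
To make these settings easier to scan, the following minimal Python sketch collects the experimental parameters quoted in the table into a single structure. The layout and key names (EXPERIMENT_CONFIG, cache_table_gb, and so on) are hypothetical conveniences, not taken from the paper or from any released code; the values are only those reported above.

# A minimal, hypothetical configuration sketch of the reported settings.
# Key names are illustrative; the values are the ones quoted in the table.
EXPERIMENT_CONFIG = {
    "heuristic": "WMB-MM(i)",        # static weighted mini-bucket with moment matching
    "i_bound": None,                 # controls heuristic strength; specific values not listed in this section
    "time_limit_hours": 2,           # "time bounds up to 2 hours"
    "memory_limit_gb": 24,           # total memory limit
    "cache_table_gb": {
        "best_first": 4,             # applies to the 5 best-first algorithms
        "depth_first_bnb": 24,       # limited only by the 24 GB total memory limit
    },
    "starting_weight": 64,           # WAOBF, WRAOBF, WRBFAOO
    "overestimation": 1,             # RBFAOO, WRBFAOO
    "benchmark": {                   # subset of the PASCAL2 Inference Challenge benchmarks
        "domains": ["grid", "pedigree", "promedas"],  # grid: 15 networks; pedigree and promedas: 10 networks (as quoted)
        "instances_per_network": 5,  # 1 easy + 4 hard; half of the variables are MAP
    },
}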
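
The weighted algorithms mentioned in the table (WAOBF, WRAOBF, WRBFAOO) follow the general weighted anytime scheme of repeatedly running a search with an inflated heuristic weight and then lowering the weight toward 1, so each iteration returns a bounded-suboptimality solution that improves on the previous one. The sketch below shows only that generic outer control loop under simplifying assumptions; anytime_weighted_search, the caller-supplied weighted_search, and the halving schedule are illustrative placeholders, not the paper's AND/OR search pseudocode. The default start_weight of 64 and the 2-hour default time limit mirror the values reported above.

import time

def anytime_weighted_search(weighted_search, start_weight=64.0,
                            decrease=lambda w: w / 2.0, time_limit_sec=7200.0):
    """Yield (weight, solution, value) tuples with non-decreasing value.

    weighted_search(w) -> (solution, value) is assumed to run one w-weighted
    search; with w = 1 it reduces to ordinary (unweighted) best-first search.
    """
    deadline = time.monotonic() + time_limit_sec   # 7200 s = 2 hours, as in the reported time bound
    w, best_value = start_weight, float("-inf")
    while time.monotonic() < deadline:
        solution, value = weighted_search(w)
        if value > best_value:          # MMAP maximizes, so larger is better
            best_value = value
            yield w, solution, value
        if w <= 1.0:                    # at weight 1 the search is already exact
            break
        w = max(1.0, decrease(w))       # lower the weight toward 1

# Toy usage with a stand-in search that only demonstrates the control flow:
if __name__ == "__main__":
    def fake_search(w):
        return ("assignment", 1.0 / w)  # pretend lower weights give better values
    for w, sol, val in anytime_weighted_search(fake_search):
        print(f"w={w:g}  value={val:.4f}")

In the paper's setting the inner search would be a weighted AND/OR best-first (or recursive best-first) search guided by the WMB-MM(i) heuristic; the toy stand-in above exists only to exercise the loop.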