Anytime Best+Depth-First Search for Bounding Marginal MAP

Authors: Radu Marinescu, Junkyu Lee, Alexander Ihler, Rina Dechter

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare against two of the best current state-of-the-art schemes and show that our best+depth search scheme produces higher quality solutions faster while also producing a bound on their accuracy, which can be used to measure solution quality during search. An extensive empirical evaluation demonstrates the effectiveness of our new methods which enjoy the strength of best-first (optimality of search) and of depth-first (memory robustness), leading to solutions for difficult instances where previous solvers were unable to find even a single solution."
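The bound on solution accuracy mentioned in the quote above can be illustrated with a small helper that turns the current lower and upper bounds into a relative optimality gap. This is a hypothetical sketch, not code from the paper:

```python
def optimality_gap(lower, upper):
    """Relative gap between the best solution found so far (a lower
    bound) and the current best-first upper bound. A hypothetical
    illustration of how a bound produced during search can be used
    to measure anytime solution quality."""
    if upper == 0:
        return float("inf")  # avoid division by zero
    return (upper - lower) / abs(upper)
```

As search progresses and the bounds tighten, the gap shrinks toward zero, giving an anytime quality guarantee on the incumbent solution.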
Researcher Affiliation | Collaboration | Radu Marinescu, IBM Research Ireland (radu.marinescu@ie.ibm.com); Junkyu Lee, Alexander Ihler, Rina Dechter, University of California, Irvine, Irvine, CA 92697, USA ({junkyul,ihler,dechter}@ics.uci.edu)
Pseudocode | Yes | Algorithm 1: Expanding a node by generating its successors; Algorithm 2: Updating the node q- and l-values; Algorithm 3: LAOBF; Algorithm 4: AAOBF; Algorithm 5: LnDFS
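The best-first expansion with periodic depth-first lookahead described by these algorithms can be sketched as a toy hybrid search. All names here are hypothetical, and the paper's actual algorithms operate on AND/OR search graphs, which this simplified tree version does not model:

```python
import heapq

def best_plus_depth_search(root, expand, h, theta=1000):
    """Toy sketch of a best+depth-first hybrid in the spirit of the
    paper's LAOBF (Algorithm 3): best-first selection from a frontier,
    with a greedy depth-first dive every `theta` expansions to obtain
    an anytime solution (and hence a bound)."""
    frontier = [(-h(root), root)]          # max-heap via negated heuristic
    best, expansions = None, 0
    while frontier:
        _, node = heapq.heappop(frontier)  # best-first: highest h first
        expansions += 1
        for child in expand(node):
            heapq.heappush(frontier, (-h(child), child))
        if expansions % theta == 0:
            cur = node                     # depth-first lookahead (dive)
            while expand(cur):
                cur = max(expand(cur), key=h)
            if best is None or h(cur) > h(best):
                best = cur                 # improved anytime solution
    return best
```

The dive makes the scheme anytime: even if memory or time runs out before best-first search converges, the deepest completed dive supplies a usable solution.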
Open Source Code | No | The paper provides no statement or link indicating that code for the described methodology is available as open source.
Open Datasets | Yes | "We compare empirically the anytime performance of our proposed best+depth-first search algorithms (LAOBF, AAOBF, and LnDFS) with the best-first (WRBFAOO) and the depth-first search algorithm (BRAOBB) on benchmark problems generated from the PASCAL2 Inference Challenge (Elidan, Globerson, and Heinemann 2012)."
Dataset Splits | No | The paper describes generating problem instances and evaluating anytime performance, but does not specify explicit train/validation/test splits with percentages or sample counts.
Hardware Specification | No | "The algorithms were allotted up to 1 hour time limit within 24 GB of memory, where all but BRAOBB had an additional 4 GB memory limit on the size of the cache table that primarily stores the exact solutions of conditional likelihoods under summation variables." This specifies memory and time limits but no specific CPU, GPU, or other hardware models.
Software Dependencies | No | The paper mentions using "WMB-MM(i)" as a heuristic function and refers to various algorithms by name (e.g., LAOBF, AAOBF, LnDFS, WRBFAOO, BRAOBB, AFSE), but it does not provide version numbers for any software components or libraries.
Experiment Setup | Yes | "Four of the algorithms require tuning parameters to optimize the anytime performance, as follows: the cutoff parameter θ of LAOBF that triggers the depth-first lookahead was set to 1000 (this value was determined experimentally), the subproblem rotation parameter of BRAOBB was set to 1000, and the initial weight and overestimation parameter of WRBFAOO were set to 64 and 1, respectively. The weight reduction schedule of WRBFAOO was w_{i+1} = w_i. The step size used by AFSE to control the partitioning of the factor sets propagated was set to 1 (we tried larger values without any significant difference in performance)."
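The tuning parameters quoted above can be collected into a single configuration record for reproduction purposes. The structure and key names below are illustrative, not taken from the authors' code:

```python
# Hypothetical summary of the reported tuning parameters; key names are
# illustrative, not from the authors' implementation.
SOLVER_PARAMS = {
    "LAOBF":   {"lookahead_cutoff_theta": 1000},  # triggers depth-first lookahead
    "BRAOBB":  {"subproblem_rotation": 1000},
    "WRBFAOO": {"initial_weight": 64, "overestimation": 1},
    "AFSE":    {"step_size": 1},                  # factor-set partitioning step
}
```

Recording the settings this way makes it easy to verify that a re-run uses exactly the parameter values reported in the paper.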