Look-Ahead with Mini-Bucket Heuristics for MPE

Authors: Rina Dechter, Kalev Kask, William Lam, Javier Larrosa

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Section 4, we present the experiments and discussion." and the section "Experimental Evaluation"
Researcher Affiliation | Academia | Rina Dechter, Kalev Kask, William Lam (University of California, Irvine, Irvine, California, USA); Javier Larrosa (UPC Barcelona Tech, Barcelona, Spain)
Pseudocode | Yes | Algorithm 1: "Bucket Error Evaluation (BEE)" and Algorithm 2: "Minimal Pruned Look-Ahead Subtree"
Open Source Code | No | No explicit statement or link to open-source code for the paper's methodology is provided.
Open Datasets | Yes | "Benchmarks. Includes instances from genetic linkage analysis (pedigree, largeFam) and medical diagnosis (promedas) (see Table 2)." and "Table 2: Benchmark statistics."
Dataset Splits | No | The paper evaluates on benchmark instances but does not provide train/validation/test dataset splits.
Hardware Specification | No | The paper mentions a 4 GB memory limit for constructing the MBE heuristic but does not specify the hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies | No | The paper notes that "Current implementations of AOBB (Kask and Dechter 2001; Marinescu and Dechter 2009) are guided by the mini-bucket heuristic" but does not list software dependencies with version numbers for its own experimental setup.
Experiment Setup | Yes | "We tried 2-3 different i-bounds for each problem instance... Within each i-bound setting, we varied the look-ahead depth from 0 to 6." and "We run Algorithm 2 for pre-processing, yielding a minimal pruned look-ahead subtree for each variable"; "When the BEE computation (and its table) gets too large (e.g. over 10^6) we sample (e.g. 10^5 assignments)."
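The last row quotes the paper's fallback for large bucket-error tables: enumerate assignments exhaustively when the table is small, and sample when it exceeds roughly 10^6 entries. A minimal sketch of that thresholding logic, with a hypothetical helper `bucket_assignments` (the function name and uniform-sampling choice are assumptions, not the paper's implementation):

```python
import itertools
import random

def bucket_assignments(domain_sizes, full_limit=10**6, n_samples=10**5, seed=0):
    """Return assignments to a bucket's variables: the full table when it
    is small enough, otherwise a random sample (hypothetical helper
    illustrating the paper's ~10^6 enumeration / 10^5 sample thresholds)."""
    table_size = 1
    for d in domain_sizes:
        table_size *= d
    if table_size <= full_limit:
        # Small table: enumerate every assignment exactly.
        return list(itertools.product(*(range(d) for d in domain_sizes)))
    # Table too large: draw uniform random assignments instead.
    rng = random.Random(seed)
    return [tuple(rng.randrange(d) for d in domain_sizes)
            for _ in range(n_samples)]
```

For example, two variables with domains of size 2 and 3 yield all 6 assignments, while seven 10-valued variables (a 10^7-entry table) fall back to 10^5 sampled assignments.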