Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Residual-Guided Look-Ahead in AND/OR Search for Graphical Models
Authors: William Lam, Kalev Kask, Javier Larrosa, Rina Dechter
JAIR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the impact of the proposed schemes in an extensive empirical evaluation for both finding exact solutions and anytime suboptimal solutions. [Section 5, Experimental Evaluation] We now evaluate empirically the impact of our look-ahead scheme in branch and bound depth first for different look-ahead depths. In Subsection 5.1 we consider the problem of finding the optimal solution and proving its optimality which is important in not-so-hard instances. The main efficiency measure here is cpu time to complete the execution. In Subsection 5.2 we consider the problem of obtaining near-optimal solutions in an anytime manner which is important when dealing with hard instances that can not be solved within reasonable time limits. |
| Researcher Affiliation | Academia | William Lam (EMAIL), Kalev Kask (EMAIL), Dept. of Computer Science, Univ. of California, Irvine, Irvine, CA 92697, USA; Javier Larrosa (EMAIL), Dept. of Computer Science, UPC Barcelona Tech, Spain; Rina Dechter (EMAIL), Dept. of Computer Science, Univ. of California, Irvine, Irvine, CA 92697, USA |
| Pseudocode | Yes | Algorithm 1 presents pseudo-code for AOBB. Algorithm 2: Mini-Bucket Elimination (Dechter & Rish, 2002); Algorithm 3: Local Bucket Error Evaluation (LBEE); Algorithm 4: Compile ϵ-pruned Look-ahead Subtrees (Compile PLS(ϵ)); Algorithm 5: Look-ahead Heuristic for MBE (MBE-Look-ahead) |
| Open Source Code | Yes | Both the baseline and our approach were based on a branch of the DAOOPT code which is implemented in C++ (64-bit) (Otten, 2013; Lam, 2017b). Lam, W. (2017b). https://github.com/willmlam/daoopt-exp. |
| Open Datasets | Yes | We used benchmarks from the UAI and PASCAL2 competitions. In particular we considered instances from genetic linkage analysis (Pedigree, Large Fam3, Type4) (Fishelson & Geiger, 2004), medical diagnosis (Promedas) (Wemmenhove, Mooij, Wiegerinck, Leisink, Kappen, & Neijt, 2007), deep belief networks (DBN), and binary grids (Grids). |
| Dataset Splits | No | The paper uses benchmark instances (e.g., Pedigree, Large Fam3, Promedas, DBN, Grids) for its experimental evaluation. It discusses selecting subsets of these instances and varying i-bounds and look-ahead depths. However, it does not specify any training, validation, or test dataset splits, as the focus is on search and optimization performance on problem instances rather than a machine learning task requiring such splits. |
| Hardware Specification | Yes | Experiments were run on an Intel Xeon X5650 2.66GHz processor, with a 4GB memory limit for each job. |
| Software Dependencies | No | The paper states that the code is "implemented in C++ (64-bit)" and mentions it is "based on a branch of the DAOOPT code". While C++ is a programming language, no specific version numbers for compilers, libraries, or the DAOOPT code itself are provided, which is necessary for full reproducibility. |
| Experiment Setup | Yes | For each instance we conducted experiments with 3 different i-bounds: the highest one fitting in memory, the lowest one that allowed to solve the instance with the baseline in less than 7200 seconds, and another one in between. We vary the look-ahead depth from 1 to 6 and used a fixed ϵ of 0.01. The time limit for every experiment was bounded to 2 hours (7200 seconds). |
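To make the quoted setup concrete, the following is a minimal sketch (not the authors' DAOOPT code, which is C++) of the general idea the paper evaluates: depth-first branch and bound in which the bound at each node is tightened by a depth-d look-ahead. All names here are hypothetical, and the toy problem uses independent per-variable costs purely for illustration; the paper's heuristics come from mini-bucket elimination over a graphical model.

```python
# Illustrative sketch only: depth-first branch and bound with a
# depth-limited look-ahead bound, in the spirit of the paper's scheme.
# `costs[v][val]` is the cost of assigning value `val` to variable v;
# beyond the look-ahead horizon we assume a trivial lower bound of 0.

def lookahead_bound(costs, var, depth):
    """Lower-bound the cost of completing the assignment from `var`
    by expanding `depth` further levels (hypothetical base heuristic = 0)."""
    if depth == 0 or var == len(costs):
        return 0.0
    return min(c + lookahead_bound(costs, var + 1, depth - 1)
               for c in costs[var])

def bnb(costs, var=0, cost_so_far=0.0, best=float("inf"), depth=2):
    """Depth-first branch and bound minimizing total assignment cost."""
    if var == len(costs):
        return min(best, cost_so_far)
    # Prune when the look-ahead lower bound already meets the incumbent.
    if cost_so_far + lookahead_bound(costs, var, depth) >= best:
        return best
    for c in sorted(costs[var]):          # cheapest value first
        best = bnb(costs, var + 1, cost_so_far + c, best, depth)
    return best

# Tiny example: three variables with two values each.
print(bnb([[3, 1], [2, 5], [4, 0]]))  # → 3.0 (choose 1 + 2 + 0)
```

The paper's experiments vary exactly this `depth` parameter (1 to 6) and measure how the extra per-node look-ahead cost trades off against the pruning it enables; the ϵ = 0.01 setting governs which look-ahead subtrees are pruned away as uninformative.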