Anytime Recursive Best-First Search for Bounding Marginal MAP
Authors: Radu Marinescu, Akihiro Kishimoto, Adi Botea, Rina Dechter, Alexander Ihler
AAAI 2019, pp. 7924-7932
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An empirical evaluation demonstrates the effectiveness of our proposed approach against current solvers. |
| Researcher Affiliation | Collaboration | IBM Research; University of California, Irvine |
| Pseudocode | Yes | Algorithm 1 describes the node generation and node value propagation during search, while Algorithms 2 and 3 show how the best child node and an unsolved child node, respectively, are selected for expansion. |
| Open Source Code | No | The paper does not provide any specific link to source code or an explicit statement about its availability (e.g., 'code available at', 'we release our code'). |
| Open Datasets | Yes | Our benchmark set includes 3 standard problem domains from grid networks (grid), genetic linkage analysis (pedigree), and medical diagnosis expert systems (promedas). Since the original problems are pure MAP tasks, we generated 5 synthetic MMAP instances for each pure MAP instance by randomly selecting 50% of the variables as MAP variables, as suggested previously in (Lee et al. 2016; Marinescu et al. 2017). Therefore, we created 75 grid, 110 pedigree, and 50 promedas MMAP instances. Table 1 shows the typical ranges of the problem instance parameters, where n is the number of variables, k is the domain size, w_c is the constrained induced width, and w_s is the induced width of the conditioned summation subproblem (the average values of the latter two measures are given in parentheses). (A hedged sketch of this instance-generation step follows the table.) |
| Dataset Splits | No | The paper describes experiments on 'benchmark problems' and 'instances' but does not specify explicit train/validation/test dataset splits or cross-validation methods for these instances. |
| Hardware Specification | No | The paper mentions '20GB of RAM memory limit' and 'time limit was set to 1 hour' but does not provide specific details on CPU, GPU models, or other hardware specifications used for experiments. |
| Software Dependencies | No | The paper refers to using 'WMBu(i)' and 'WMBl(10)' schemes but does not list specific software packages or libraries with version numbers required for reproducibility. |
| Experiment Setup | Yes | For RBFAOO+ and RBFAOO we set the overestimation parameter δ to 2.0, and their garbage collection scheme replaced R = 30% of the cache entries. The cutoff parameter θ used by LAOBF to trigger the depth-first lookaheads was set to 1000, while AnySBFS and AnyLDFS were run with the default parameters specified in (Marinescu, Dechter, and Ihler 2018). (A hedged configuration sketch also follows the table.) |
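
The instance-generation protocol quoted in the Open Datasets row (five synthetic MMAP queries per pure MAP instance, each marking a random 50% of the variables as MAP variables) is simple enough to sketch. The snippet below is a minimal illustration under that description only; the function name, arguments, and seed handling are hypothetical and are not taken from the paper or its code.

```python
import random

def make_mmap_instances(n_vars, n_copies=5, map_fraction=0.5, seed=0):
    """Derive synthetic MMAP query sets from a pure MAP instance.

    Following the protocol quoted above, each copy marks a random 50%
    of the variables as MAP (MAX) variables; the rest are summed out.
    Function and argument names are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    n_map = int(round(map_fraction * n_vars))
    instances = []
    for _ in range(n_copies):
        map_vars = sorted(rng.sample(range(n_vars), n_map))
        instances.append(map_vars)
    return instances

# Example: 5 synthetic MMAP query sets for a 100-variable MAP instance.
queries = make_mmap_instances(n_vars=100)
```

Pairing each query set with the original factor graph is what yields the 75 grid, 110 pedigree, and 50 promedas MMAP instances reported in the table.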
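
The Experiment Setup row fixes a handful of solver parameters, which a plain configuration record makes easy to restate. The structure and field names below are hypothetical placeholders (the paper's solver interface is not public); only the values come from the paper: δ = 2.0 and R = 30% for RBFAOO+/RBFAOO, θ = 1000 for LAOBF, and the 1-hour time limit and 20 GB memory limit noted in the Hardware Specification row.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MMAPExperimentConfig:
    """Parameter values reported in the Experiment Setup row.

    The record structure and field names are illustrative; only the
    values are taken from the paper's experimental section.
    """
    overestimation_delta: float = 2.0   # δ for RBFAOO+ / RBFAOO
    gc_replace_fraction: float = 0.30   # R: share of cache entries replaced
    laobf_lookahead_cutoff: int = 1000  # θ: triggers depth-first lookaheads
    time_limit_sec: int = 3600          # 1-hour time limit per instance
    memory_limit_gb: int = 20           # 20 GB RAM limit

config = MMAPExperimentConfig()
print(config)
```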