Towards Clause-Learning State Space Search: Learning to Recognize Dead-Ends
Authors: Marcel Steinmetz, Joerg Hoffmann
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that this can be quite powerful. On problems where dead-ends abound, the learning reliably reduces the search space by several orders of magnitude. and Figure 2: Empirical results. Left: Coverage. Best per-domain results highlighted in boldface. DFS-CL is our approach, α = ∞ with unlimited learning, α = 1 without learning. Other abbreviations see text. For each domain, there are 30 or 5 base instances as specified (union of Nakhost et al.'s small and large test suites, for No Mystery and Rovers). For each base instance and value of W, the resource budget is set according to W. Right: Search space size for DFS-CL with learning (α = ∞, y-axis) vs. without learning (α = 1, x-axis). |
| Researcher Affiliation | Academia | Marcel Steinmetz and Jörg Hoffmann, Saarland University, Saarbrücken, Germany, {steinmetz,hoffmann}@cs.uni-saarland.de |
| Pseudocode | Yes | Algorithm 1: Generic forward search algorithm with dead-end identification and learning. and Algorithm 2: Refining C for Ŝ with recognized neighbors T̂. |
| Open Source Code | No | The paper does not provide any specific link, explicit statement, or mention of supplementary materials for the open-source code of the methodology described. It only states, "Our implementation is in FD (Helmert 2006)," referring to a third-party planner. |
| Open Datasets | Yes | We use the benchmarks by Nakhost et al. (2012), which are especially suited as they are controlled: the minimum required budget bmin is known, and the actual budget is set to W · bmin. and union of Nakhost et al.'s small and large test suites, for No Mystery and Rovers). |
| Dataset Splits | No | The paper evaluates its state-space search method on planning benchmarks (problem instances) but does not describe training, validation, or test dataset splits in the typical machine learning context. |
| Hardware Specification | Yes | We use a cluster of Intel E5-2660 machines running at 2.20 GHz, with runtime (memory) limits of 30 minutes (4 GB). |
| Software Dependencies | Yes | Our implementation is in FD (Helmert 2006). |
| Experiment Setup | Yes | We break ties (i.e., order children in the DFS) using hFF (Hoffmann and Nebel 2001). and We always start with C = C1 := {{p} | p ∈ F}... and We experimented with an input parameter α, stopping the learning when the number of counters reaches α times the number of counters for C1. We experimented with α ∈ {1, 16, 32, 64, 128, ∞}. and As a standard satisficing planner, we run FD's greedy best-first dual-queue search with hFF and preferred operators, denoted FD-hFF. |
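To make the setup concrete, the following is a minimal, hypothetical sketch of the search scheme the table describes: a depth-first search that, when a subtree is exhausted without reaching the goal, records the root of that subtree as a recognized dead-end and prunes it on re-encounter. The α-style cutoff is mimicked by capping the store at α times the number of facts. This is an illustration only, not the paper's method: the actual DFS-CL refines a critical-path heuristic via conjunction (clause) learning rather than memoizing whole states, and all names below (`dfs_cl`, `successors`, the toy state space) are invented for the example.

```python
def dfs_cl(init, goal, successors, facts, alpha=16):
    """Toy DFS with dead-end memoization (stand-in for clause learning).

    States are frozensets of facts; the goal is reached when the goal
    facts are a subset of the current state.
    """
    learned = set()            # recognized dead-end states
    cap = alpha * len(facts)   # alpha-style cap on how much we learn

    def search(state, on_path):
        if goal <= state:
            return [state]                       # goal reached
        if state in learned or state in on_path:
            return None                          # prune
        for succ in successors(state):
            sub = search(succ, on_path | {state})
            if sub is not None:
                return [state] + sub
        if len(learned) < cap:
            learned.add(state)   # every successor failed: dead-end
        return None

    return search(init, frozenset()), learned


# Tiny hypothetical state space with one dead-end ({'a','c'}):
table = {
    frozenset({'a'}): [frozenset({'a', 'b'}), frozenset({'a', 'c'})],
    frozenset({'a', 'b'}): [frozenset({'a', 'c'}),
                            frozenset({'a', 'b', 'g'})],
}
plan, dead = dfs_cl(frozenset({'a'}), frozenset({'g'}),
                    lambda s: table.get(s, []), {'a', 'b', 'c', 'g'})
```

On this toy instance the search finds a three-state plan and learns the single dead-end state, which would then be pruned for free anywhere else it appeared; the paper's point is that on dead-end-rich benchmarks this kind of reuse shrinks the search space by orders of magnitude.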