Saturated Post-hoc Optimization for Classical Planning
Authors: Jendrik Seipp, Thomas Keller, Malte Helmert
AAAI 2021, pp. 11947-11953 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented saturated post-hoc optimization in the Fast Downward planning system (Helmert 2006) and used the Downward Lab toolkit (Seipp et al. 2017) for running experiments on Intel Xeon Silver 4114 processors. Our benchmark set consists of all 1827 tasks without conditional effects from the optimal sequential tracks of the International Planning Competitions 1998-2018. We limit time to 30 minutes and memory to 3.5 GiB. All benchmarks, code and experiment data are published online (Seipp, Keller, and Helmert 2021). Table 1 compares the number of solved tasks by h^PhO and h^SPhO for the four types of heuristic sets. We see that saturating the costs is beneficial for all considered abstraction heuristics: the overall coverage increases by 9, 51, 176 and 171 tasks, respectively. A per-domain analysis reveals that h^SPhO solves more tasks than h^PhO in 6, 17, 20 and 20 domains across the four settings, while the opposite is true in only a single domain for HC and SYS and never for CART nor COMB. Figure 3 explains why h^SPhO solves so many more tasks than h^PhO for the combination of abstraction heuristics (COMB) by comparing the number of expansions before the last f-layer: saturating the costs often makes the resulting heuristic much more accurate. (The LP behind h^SPhO is sketched below the table.) |
| Researcher Affiliation | Academia | Jendrik Seipp (1,2), Thomas Keller (1), Malte Helmert (1); 1: University of Basel, Switzerland; 2: Linköping University, Sweden |
| Pseudocode | No | The paper describes algorithms and linear programs in textual and mathematical formulations, but it does not include explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | All benchmarks, code and experiment data are published online (Seipp, Keller, and Helmert 2021). Seipp, J.; Keller, T.; and Helmert, M. 2021. Code, benchmarks and experiment data for the AAAI 2021 paper "Saturated Post-hoc Optimization for Classical Planning". https://doi.org/10.5281/zenodo.4302051. |
| Open Datasets | Yes | Our benchmark set consists of all 1827 tasks without conditional effects from the optimal sequential tracks of the International Planning Competitions 1998-2018. All benchmarks, code and experiment data are published online (Seipp, Keller, and Helmert 2021). |
| Dataset Splits | No | The paper evaluates on a fixed benchmark set and does not describe explicit training, validation, or test splits with percentages or counts; the IPC tasks serve purely as an evaluation set. |
| Hardware Specification | Yes | We implemented saturated post-hoc optimization in the Fast Downward planning system (Helmert 2006) and used the Downward Lab toolkit (Seipp et al. 2017) for running experiments on Intel Xeon Silver 4114 processors. |
| Software Dependencies | No | The paper mentions using the 'Fast Downward planning system (Helmert 2006)' and the 'Downward Lab toolkit (Seipp et al. 2017)'. While these are software dependencies, no specific version numbers are given, only citations to the original publications. |
| Experiment Setup | Yes | We limit time to 30 minutes and memory to 3.5 GiB. In our experiments we compute cost partitionings over four different sets of abstraction heuristics: pattern databases found by hill climbing (HC, Haslum et al. 2007), systematic pattern databases of sizes 1 and 2 (SYS, Pommerening, Röger, and Helmert 2013), Cartesian abstractions of landmark and goal task decompositions (CART, Seipp and Helmert 2018) and the combination of these three heuristic sets (COMB=HC+SYS+CART). (An experiment-script sketch follows the table.) |
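
The LP behind the quoted coverage numbers: both h^PhO and h^SPhO maximize a weighted sum of component heuristic values subject to per-operator cost constraints, and the saturated variant tightens the constraint coefficients from full operator costs to saturated costs. The snippet below is a minimal sketch of that dual LP as we read it from the paper, assuming the component values h_i(s) and the saturated cost functions scf_i(o) have already been computed; the function name, the SciPy solver and the toy numbers are ours for illustration, not the paper's implementation (which is part of Fast Downward and available in the Zenodo archive).

```python
import numpy as np
from scipy.optimize import linprog


def saturated_pho_value(heuristic_values, saturated_costs, operator_costs):
    """Solve the dual (saturated) post-hoc optimization LP for one state.

    heuristic_values: h_i(s) for each component heuristic i.
    saturated_costs:  matrix sat[i][o] with the saturated cost scf_i(o), i.e.
                      the cheapest cost for operator o under which heuristic i
                      keeps all of its estimates (given as input here).
    operator_costs:   original costs cost(o).

    LP:  maximize   sum_i w_i * h_i(s)
         subject to sum_i w_i * scf_i(o) <= cost(o)   for every operator o
                    w_i >= 0
    """
    h = np.asarray(heuristic_values, dtype=float)
    sat = np.asarray(saturated_costs, dtype=float)  # shape: (heuristics, operators)
    costs = np.asarray(operator_costs, dtype=float)

    # linprog minimizes, so negate the objective to maximize the weighted sum.
    result = linprog(c=-h, A_ub=sat.T, b_ub=costs, bounds=[(0, None)] * len(h))
    assert result.success, result.message
    return -result.fun


# Toy example: two heuristics over three unit-cost operators.
# Heuristic 1 only needs operators o1, o2; heuristic 2 only needs o2, o3.
print(saturated_pho_value(
    heuristic_values=[3, 2],
    saturated_costs=[[1, 1, 0],
                     [0, 1, 1]],
    operator_costs=[1, 1, 1],
))  # -> 3.0 (w = (1, 0) is optimal because operator o2 is shared)
```

With nonnegative costs, scf_i(o) <= cost(o), so these constraints are never tighter than the plain post-hoc optimization constraints that use cost(o) for every operator relevant to heuristic i; this is the intuition for why saturating the costs cannot decrease the heuristic value.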
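To make the reported setup concrete, here is a minimal Downward Lab sketch mirroring the stated 30-minute / 3.5 GiB limits on a local machine. The repository path, revision, two-domain suite and the single example configuration (plain post-hoc optimization over systematic patterns of sizes 1 and 2, which ships with Fast Downward) are placeholders; the actual experiment scripts, grid environment and the h^SPhO configurations are in the authors' Zenodo archive linked above.

```python
#!/usr/bin/env python3
"""Minimal Downward Lab sketch mirroring the reported resource limits.

Placeholders (not taken from the paper's published scripts): the repository
path, revision, benchmark suite and the example search configuration.
"""
import os

from downward.experiment import FastDownwardExperiment
from downward.reports.absolute import AbsoluteReport
from lab.environments import LocalEnvironment

REPO = os.environ["DOWNWARD_REPO"]              # path to a Fast Downward clone
BENCHMARKS = os.environ["DOWNWARD_BENCHMARKS"]  # IPC 1998-2018 benchmark directory
SUITE = ["gripper", "miconic"]                  # placeholder for the 1827-task suite

# 30-minute and 3.5 GiB limits as reported in the paper.
DRIVER_OPTIONS = ["--overall-time-limit", "30m", "--overall-memory-limit", "3584M"]

exp = FastDownwardExperiment(environment=LocalEnvironment(processes=2))
exp.add_suite(BENCHMARKS, SUITE)

# Plain post-hoc optimization over systematic patterns of sizes 1 and 2 (SYS);
# this configuration ships with Fast Downward. The saturated variant (hSPhO)
# lives in the authors' code archive.
exp.add_algorithm(
    "pho-sys2",
    REPO,
    "main",
    ["--search", "astar(operatorcounting([pho_constraints(patterns=systematic(2))]))"],
    driver_options=DRIVER_OPTIONS,
)

exp.add_step("build", exp.build)
exp.add_step("start", exp.start_runs)
exp.add_fetcher(name="fetch")
exp.add_report(
    AbsoluteReport(attributes=["coverage", "expansions_until_last_jump"]),
    outfile="report.html",
)

exp.run_steps()
```

The `expansions_until_last_jump` attribute corresponds to the "expansions before the last f-layer" measure used in the paper's Figure 3 comparison.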