Completeness-Preserving Dominance Techniques for Satisficing Planning

Authors: Álvaro Torralba

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We run experiments on all satisficing-track STRIPS planning instances from the international planning competitions (IPC'98 - IPC'14). All experiments were conducted on a cluster of Intel Xeon E5-2650v3 machines with time (memory) cut-offs of 30 minutes (4 GB). Our goal is to evaluate the potential of current dominance techniques to enhance search in satisficing planning. As a simple baseline, we use lazy GBFS in Fast Downward [Helmert, 2006] with the h^FF heuristic [Hoffmann and Nebel, 2001], and compare the results against dominance-based EHC guided with blind search (h^B) and the h^FF heuristic. We also include the performance of LAMA [Richter and Westphal, 2010] and Mercury [Katz and Hoffmann, 2014; Domshlak et al., 2015] as representatives of more modern planners.
Researcher Affiliation | Academia | Álvaro Torralba, Saarland University, Saarland Informatics Campus, Saarbrücken, Germany, torralba@cs.uni-saarland.de
Pseudocode | No | The paper describes algorithms verbally and through definitions (e.g., the DEHC(X) algorithm) but does not include a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We run experiments on all satisficing-track STRIPS planning instances from the international planning competitions (IPC'98 - IPC'14).
Dataset Splits | No | The paper uses planning instances from the International Planning Competitions (IPC) but does not specify explicit training, validation, or test splits. In classical planning, each instance is solved from its initial state to a goal rather than being split as in machine-learning datasets.
Hardware Specification | Yes | All experiments were conducted on a cluster of Intel Xeon E5-2650v3 machines with time (memory) cut-offs of 30 minutes (4 GB).
Software Dependencies | No | The paper mentions software tools such as 'Fast Downward', 'LAMA', and 'Mercury', but it does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | No | The paper reports time (memory) cut-offs of 30 minutes (4 GB) and names the planners and heuristics used, but it does not provide fine-grained configuration details such as full planner parameter settings (there are no machine-learning hyperparameters, e.g., learning rates or batch sizes, in this classical planning setting). A minimal sketch of how the stated baseline and resource limits could be approximated with the Fast Downward driver is given after this table.
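The sketch below illustrates how the quoted baseline (lazy GBFS with the h^FF heuristic in Fast Downward, under 30-minute / 4 GB limits) might be invoked. It is an assumption-laden reconstruction, not the author's actual experiment scripts: the file names (domain.pddl, problem.pddl), the run_fd helper, and the exact driver option behavior across Fast Downward versions are assumptions, and the paper's dominance-based EHC configurations are not part of stock Fast Downward.

```python
import subprocess


def run_fd(domain, problem, search, time_limit="30m", memory_limit="4G"):
    """Run one Fast Downward configuration under the paper's 30-minute / 4 GB limits.

    Assumes fast-downward.py is on the PATH; all file names are placeholders.
    """
    cmd = [
        "fast-downward.py",
        "--overall-time-limit", time_limit,      # 30-minute cut-off stated in the paper
        "--overall-memory-limit", memory_limit,  # 4 GB cut-off stated in the paper
        domain, problem,
        "--search", search,
    ]
    return subprocess.run(cmd, capture_output=True, text=True)


# Baseline quoted in the table: lazy GBFS with the h^FF heuristic.
baseline = run_fd("domain.pddl", "problem.pddl", "lazy_greedy([ff()])")
print(baseline.returncode)

# LAMA can be run through a driver alias instead of an explicit --search string,
# e.g.: fast-downward.py --alias seq-sat-lama-2011 domain.pddl problem.pddl
```

Reproducing the dominance-based EHC and Mercury results would additionally require the author's extended planner and the IPC 2014 Mercury code, neither of which is specified in the paper.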