totSAT – Totally-Ordered Hierarchical Planning Through SAT

Authors: Gregor Behnke, Daniel Höller, Susanne Biundo

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Furthermore, we have conducted an extensive empirical evaluation to compare our new planner against state-of-the-art HTN planners. It shows that our technique outperforms any of these systems."
Researcher Affiliation | Academia | "Gregor Behnke, Daniel Höller, Susanne Biundo, Institute of Artificial Intelligence, Ulm University, D-89069 Ulm, Germany, {gregor.behnke, daniel.hoeller, susanne.biundo}@uni-ulm.de"
Pseudocode | No | The paper describes algorithms and formulae but does not provide structured pseudocode blocks or explicitly labeled algorithm listings.
Open Source Code | Yes | "Our implementation of totSAT uses the parser and preprocessor of the planning system PANDA (Bercher, Keen, and Biundo 2014). We will release the code of totSAT publicly."
Open Datasets | No | The paper refers to the benchmark domains and instances used in its evaluation (e.g., UM-Translog, Woodworking, Satellite) but does not provide links or access information for a publicly available dataset.
Dataset Splits | No | The paper mentions using domains and instances (e.g., UM-Translog, Woodworking, Satellite, SmartPhone, ENTERTAINMENT, ROVER, TRANSPORT) for evaluation but does not specify training, validation, or test dataset splits. It describes how instances were created or adapted, but not how they were partitioned for evaluation.
Hardware Specification | Yes | "Each planner was given 10 minutes runtime and 4 GB of RAM per instance on an Intel Xeon E5-2660." (See the harness sketch after this table.)
Software Dependencies | No | The paper mentions several software tools and systems (e.g., PANDA, SHOP, HTN2STRIPS, jasper, cryptominisat5, MapleCOMSPS, Riss6, minisat) but does not give specific version numbers for these dependencies, which would be needed for reproducibility.
Experiment Setup | No | The paper states general experimental conditions such as runtime and RAM limits ("10 minutes runtime and 4 GB of RAM per instance") and the SAT-solver timeout policy ("always set the timeout of the solver to the remaining runtime"; see the loop sketch after this table). However, beyond these limits it does not specify further configuration details of the kind typically found in experiment-setup sections.
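
The per-instance limits quoted under Hardware Specification can be made concrete. The following is a minimal sketch assuming a Linux host and a hypothetical planner command line; it is not part of the totSAT or PANDA release and only illustrates one common way to enforce a 10-minute / 4 GB budget per instance.

```python
import resource
import subprocess

# Hypothetical harness, not from the paper: enforce the stated per-instance
# limits ("10 minutes runtime and 4 GB of RAM per instance") on a child process.
CPU_SECONDS = 10 * 60
MEMORY_BYTES = 4 * 2**30

def run_with_limits(cmd):
    def set_limits():
        # Applied in the child just before exec, so the parent is unaffected.
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
        resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))
    return subprocess.run(cmd, preexec_fn=set_limits)

# Example (hypothetical binary and instance files):
# run_with_limits(["./totSAT", "domain.hddl", "problem.hddl"])
```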
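
The timeout policy quoted under Experiment Setup also fits a short sketch. SAT-based planners of this kind typically iterate over increasing encoding bounds; the loop below illustrates that pattern under that assumption, with `encode_to_cnf` and `run_sat_solver` as hypothetical placeholders rather than real totSAT or PANDA functions. The only detail taken from the paper is that each solver call receives the remaining runtime as its timeout.

```python
import time

TOTAL_BUDGET_S = 10 * 60  # overall per-instance budget from the paper

def encode_to_cnf(instance, bound):
    """Hypothetical: encode the totally ordered HTN problem up to `bound`."""
    raise NotImplementedError

def run_sat_solver(cnf, timeout_s):
    """Hypothetical: call an external SAT solver (e.g., cryptominisat5)."""
    raise NotImplementedError

def solve(instance):
    start = time.monotonic()
    bound = 1
    while True:
        remaining = TOTAL_BUDGET_S - (time.monotonic() - start)
        if remaining <= 0:
            return None  # overall budget exhausted
        cnf = encode_to_cnf(instance, bound)
        # Per the paper: "always set the timeout of the solver to the
        # remaining runtime".
        model = run_sat_solver(cnf, timeout_s=remaining)
        if model is not None:
            return model  # a real planner would decode this into a plan
        bound += 1  # unsatisfiable at this bound: deepen and retry
```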