Delete- and Ordering-Relaxation Heuristics for HTN Planning

Authors: Daniel Höller, Pascal Bercher, Gregor Behnke

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our heuristics are competitive with state-of-the-art heuristics in terms of coverage, but much more informed. [...] 6 Evaluation: We integrated our heuristic into the PANDA framework [Bercher et al., 2014] and combined it with the progression algorithm by Höller et al. [2020]. [...] The coverage of all systems is given in Figure 3 (left)."
Researcher Affiliation | Academia | "1 Saarland University, Saarland Informatics Campus; 2 The Australian National University, College of Engineering and Computer Science; 3 University of Freiburg; 4 Ulm University, Institute of Artificial Intelligence"
Pseudocode | No | The paper describes its Integer Programming model and its various constraints, but it contains no dedicated pseudocode block or algorithm section.
Open Source Code | No | "Source code is available at panda.hierarchical-task.net" (This link points to the general PANDA framework into which the authors integrated their work, not explicitly to the source code for the novel heuristics presented in this paper.)
Open Datasets | No | "We used the same problem set used in related work [Höller et al., 2018; Behnke et al., 2018; Behnke et al., 2019] including 144 instances." (While the paper references a problem set used in prior work, it provides no direct link, DOI, or repository for accessing that dataset.)
Dataset Splits | No | The paper mentions a problem set of 144 instances but provides no dataset split information (e.g., percentages, sample counts, or citations to predefined splits) for training, validation, or test sets.
Hardware Specification | Yes | "We used a server with Xeon E5-2660 CPUs (2.60 GHz), 4 GB RAM and 10 minutes time limit."
Software Dependencies | Yes | "Our IP model was solved using the CPLEX solver (version 12.8, restricted to 1 CPU core)."
Experiment Setup | No | The paper describes heuristic configurations (e.g., hdor and hdor-lp) and search strategies (e.g., Greedy Best First, A*, GA*), but it does not report specific hyperparameters such as learning rates, batch sizes, or optimizer settings.
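The "Software Dependencies" row notes that the paper's heuristic values come from solving an Integer Programming model with CPLEX 12.8 on a single core. As a generic illustration of what solving such a model involves, here is a minimal, hypothetical 0/1 integer program solved by exhaustive search using only the Python standard library. The toy costs and cover constraints are invented for illustration; they are not the paper's actual IP encoding, and real solvers such as CPLEX use branch-and-bound rather than enumeration.

```python
from itertools import product

# Hypothetical toy 0/1 integer program (NOT the paper's IP model):
#   minimize    3*x0 + 2*x1 + 4*x2          (illustrative action costs)
#   subject to  x0 + x1      >= 1           (cover constraint A)
#               x1 + x2      >= 1           (cover constraint B)
#               x0, x1, x2 in {0, 1}

def solve_ip(costs, constraints):
    """Exhaustively solve a tiny 0/1 IP.

    costs: list of objective coefficients, one per binary variable.
    constraints: list of (coefficients, rhs) pairs, each read as
                 sum(coefficients[i] * x[i]) >= rhs.
    Returns (optimal value, optimal assignment) or (None, None)
    if no assignment is feasible.
    """
    best_value, best_assignment = None, None
    for x in product((0, 1), repeat=len(costs)):
        feasible = all(
            sum(a * xi for a, xi in zip(coeffs, x)) >= rhs
            for coeffs, rhs in constraints
        )
        if feasible:
            value = sum(c * xi for c, xi in zip(costs, x))
            if best_value is None or value < best_value:
                best_value, best_assignment = value, x
    return best_value, best_assignment

value, assignment = solve_ip(
    costs=[3, 2, 4],
    constraints=[([1, 1, 0], 1), ([0, 1, 1], 1)],
)
print(value, assignment)  # setting only x1 satisfies both constraints at cost 2
```

Enumerating all 2^n assignments is only viable for toy instances; the point is merely to make concrete what "the IP model was solved" means in the dependencies row above.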