Automatic Generation of High-Level State Features for Generalized Planning

Authors: Damir Lotinac, Javier Segovia-Aguas, Sergio Jiménez, Anders Jonsson

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that we generate features for diverse generalized planning problems and hence compute generalized plans without providing a prior high-level representation of the states. We also bring a new landscape of challenging benchmarks to classical planning, since our compilation naturally models classification tasks as classical planning problems. In all experiments, we run the classical planner Fast Downward [Helmert, 2006] with the LAMA-2011 setting [Richter and Westphal, 2010] on an Intel Core i5 3.10GHz x 4 with a 4GB memory bound and a time limit of 3600s. Table 1 summarizes the obtained results.
Researcher Affiliation | Academia | Damir Lotinac, Javier Segovia-Aguas, Sergio Jiménez and Anders Jonsson. Dept. of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona, Spain. {damir.lotinac,javier.segovia,sergio.jimenez,anders.jonsson}@upf.edu
Pseudocode | Yes | Figure 1: Planning program for finding the minimum element in a list of integers of size n. Instructions on lines 0 and 3, represented with diamonds, are conditional goto instructions that, respectively, jump to line 2 when i ≥ j and to line 0 when i ≠ n. The outgoing left branch of a diamond indicates that the condition holds, and the right branch that it does not. Instructions on lines 1 and 2 are sequential instructions and are represented with boxes. Finally, end marks the program termination.
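The four-line goto program described in Figure 1 can be sketched in ordinary Python. This is not the authors' formal planning-program model, only an illustrative translation; the function name and the exact comparison tested at line 0 (here assumed to compare the scanned element against the current minimum) are assumptions.

```python
def find_min_index(v):
    """Sketch of the Figure 1 planning program: j tracks the index of
    the current minimum while i scans the list; the two conditional
    gotos map onto the while condition and the if test."""
    n = len(v)
    j = 0  # index of the current minimum
    i = 0  # scan index
    while i != n:            # line 3: jump back to line 0 while i != n
        if not (v[i] >= v[j]):   # line 0: jump to line 2 when the test holds
            j = i                # line 1: update the minimum index
        i += 1                   # line 2: advance the scan index
    return j
```

The two diamond instructions become the loop guard and the `if` test, while the two box instructions become the loop body, which mirrors how goto programs over a bounded instruction set reduce to structured control flow.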
Open Source Code | No | The paper does not provide any concrete access (link or explicit statement) to open-source code for the described methodology.
Open Datasets | Yes | This model is particularly natural for classification tasks in which both the examples and the classifier are described using logic. Michalski's train [Michalski et al., 2013] is a good example of such tasks.
Dataset Splits | No | The paper mentions evaluating on benchmarks and provides results in Table 1, but it does not specify any training, validation, or test dataset splits, percentages, or sample counts.
Hardware Specification | Yes | In all experiments, we run the classical planner Fast Downward [Helmert, 2006] with the LAMA-2011 setting [Richter and Westphal, 2010] on an Intel Core i5 3.10GHz x 4 with a 4GB memory bound and a time limit of 3600s.
Software Dependencies | No | The paper mentions using 'Fast Downward [Helmert, 2006]' and the 'LAMA-2011 setting [Richter and Westphal, 2010]'. While these identify the software, they refer to the publication years of the papers describing them rather than explicit version numbers for the software components themselves (e.g., Fast Downward vX.Y).
Experiment Setup | Yes | In all experiments, we run the classical planner Fast Downward [Helmert, 2006] with the LAMA-2011 setting [Richter and Westphal, 2010] on an Intel Core i5 3.10GHz x 4 with a 4GB memory bound and a time limit of 3600s.
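For readers attempting to reproduce this setup, the modern Fast Downward driver script exposes the LAMA-2011 configuration as a built-in alias together with time and memory limit flags. This is a sketch against a current Fast Downward release, not the authors' exact command (the 2016-era checkout, flag names, and the `domain.pddl`/`problem.pddl` filenames are assumptions).

```shell
# Approximate the reported setup: LAMA-2011 configuration,
# 3600s time limit, 4 GB memory bound.
./fast-downward.py \
    --alias seq-sat-lama-2011 \
    --overall-time-limit 3600s \
    --overall-memory-limit 4G \
    domain.pddl problem.pddl
```

On systems or versions where the driver's limit flags are unavailable, the same 4 GB bound can be imposed externally, e.g. with `ulimit -v 4194304` before invoking the planner.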