Learning Features and Abstract Actions for Computing Generalized Plans

Authors: Blai Bonet, Guillem Francès, Hector Geffner (pp. 2703-2710)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental Results: We evaluate the computational model on four generalized problems Q. For each Q, we select a few training instances P in Q by hand, from which the sample sets S are drawn. S is constructed by collecting the first m states generated by a breadth-first search, along with the states generated in an optimal plan. The plans ensure that S contains some goal states and provide the state transitions that are marked as goal-relevant when constructing the theory TG(S, F), which is the one used in the experiments. S is closed by fully expanding the states selected. The value of m is chosen so that the resulting number of transitions in S, which depends on the branching factor, is around 500. The bound k for F = Fk is set to 8. Distance features dist are used only in the last problem. The Weighted-Max solver is Open-WBO (Martins, Manquinho, and Lynce 2014) and the FOND planner is SAT-FOND (Geffner and Geffner 2018). The translation from QF to Q+F is very fast, on the order of 0.01 seconds in all cases. The whole computational pipeline summarized by steps 1-6 above is processed on Intel Xeon E5-2660 CPUs with time and memory cutoffs of 1h and 32GB. Table 1 summarizes the relevant data for the problems, including the size of the CNF encodings corresponding to the theories T and TG. (A sketch of this sampling procedure appears after the table.)
Researcher Affiliation | Academia | Blai Bonet, Universidad Simón Bolívar, Caracas, Venezuela (bonet@usb.ve); Guillem Francès, University of Basel, Basel, Switzerland (guillem.frances@unibas.ch); Hector Geffner, ICREA & Universitat Pompeu Fabra, Barcelona, Spain (hector.geffner@upf.edu)
Pseudocode | No | The paper describes computational steps but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper refers to a 'Translator available at https://github.com/bonetblai/qnp2fond' for a specific conversion step, but provides no statement or link for open-source code covering the full methodology (e.g., the Max-SAT-based feature learning).
Open Datasets | No | The paper refers to instances from domains such as 'Blocksworld', 'Qgripper', and 'Qreward', and states that 'we select a few training instances P in Q by hand', but it does not provide access information (links, DOIs, repositories, or formal citations with authors and year) for these instances or the sample sets derived from them.
Dataset Splits | No | The paper mentions 'training instances' and that 'sample sets S are drawn', but does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the data partitioning.
Hardware Specification | Yes | The whole computational pipeline summarized by steps 1-6 above is processed on Intel Xeon E5-2660 CPUs with time and memory cutoffs of 1h and 32GB.
Software Dependencies | Yes | The Weighted-Max solver is Open-WBO (Martins, Manquinho, and Lynce 2014) and the FOND planner is SAT-FOND (Geffner and Geffner 2018).
Experiment Setup | Yes | The value of m is chosen so that the resulting number of transitions in S, which depends on the branching factor, is around 500. The bound k for F = Fk is set to 8. (See the m-tuning sketch below.)
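
The sampling procedure quoted in the Research Type row (the first m states of a breadth-first search plus the states of an optimal plan, closed by fully expanding every selected state) is concrete enough to sketch. The following Python fragment is a minimal illustration, not the authors' code; `successors` and `optimal_plan_states` are hypothetical stand-ins for the domain's transition function and the output of an external optimal planner.

```python
from collections import deque

def build_sample(init_state, successors, optimal_plan_states, m):
    """Sketch of the sample set S described in the paper: the first m
    states generated by a breadth-first search from the initial state,
    plus the states along an optimal plan (so S contains goal states),
    closed by fully expanding every selected state."""
    selected = []
    seen = {init_state}
    queue = deque([init_state])
    while queue and len(selected) < m:
        s = queue.popleft()
        selected.append(s)
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    # Add the states of an optimal plan (hypothetical external planner).
    selected.extend(x for x in optimal_plan_states if x not in selected)
    # Close S: include every successor of a selected state and record the
    # transitions; m is tuned so that len(transitions) is around 500.
    states, transitions = set(selected), set()
    for s in selected:
        for t in successors(s):
            states.add(t)
            transitions.add((s, t))
    return states, transitions
```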
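The Experiment Setup row fixes two knobs: the BFS cutoff m is tuned per instance so that the closed sample has roughly 500 transitions, and the feature pool is Fk with k = 8. The paper does not say how m is searched, so the doubling loop below, which reuses the hypothetical `build_sample` above, is an assumption.

```python
def choose_m(init_state, successors, optimal_plan_states, target=500):
    """Grow m until the closed sample has about `target` transitions.
    The paper reports ~500 transitions; this doubling search is an
    assumed tuning rule, not the authors' procedure."""
    m = 16
    while True:
        _, transitions = build_sample(init_state, successors,
                                      optimal_plan_states, m)
        if len(transitions) >= target:
            return m
        m *= 2
```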
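The Hardware and Software rows name the external tools (Open-WBO as the Weighted-Max solver, SAT-FOND as the FOND planner) and the 1h/32GB cutoffs. Below is a minimal sketch of enforcing such per-process limits around a solver call on a POSIX system; the bare `open-wbo theory_TG.wcnf` invocation follows the solver's usual command-line usage, but the binary name, input filename, and absence of flags are assumptions to check against the actual release.

```python
import resource
import subprocess

def run_with_cutoffs(cmd, time_s=3600, mem_bytes=32 * 1024**3):
    """Run an external solver under the per-process CPU-time and
    address-space limits reported in the paper (1h, 32GB).
    POSIX-only: the resource module is unavailable on Windows."""
    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (time_s, time_s))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

# Assumed invocation: Open-WBO reads a weighted CNF (WCNF) file.
result = run_with_cutoffs(["open-wbo", "theory_TG.wcnf"])
print(result.stdout)
```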