Deep Learning for Cost-Optimal Planning: Task-Dependent Planner Selection

Authors: Silvan Sievers, Michael Katz, Shirin Sohrabi, Horst Samulowitz, Patrick Ferber (pp. 7715-7723)

AAAI 2019

Reproducibility assessment. Each entry below lists the variable, the result, and the LLM-extracted evidence from the paper.

Research Type: Experimental
Evidence: "We explore some of the questions that inevitably rise when applying such a technique, and present various ways of building practically useful online portfolio-based planners. An evidence of the usefulness of our proposed technique is a planner that won the cost-optimal track of the International Planning Competition 2018." From the Experimental Evaluation section: "With the hyper-parameters for the 24 settings fixed, we train the models either on a subset of the training set according to the split strategy of the setting, or on the entire training data (no validation)."

Researcher Affiliation: Collaboration
Evidence: Silvan Sievers, University of Basel, Basel, Switzerland (silvan.sievers@unibas.ch); Michael Katz, IBM Research, Yorktown Heights, NY, USA (michael.katz1@ibm.com); Shirin Sohrabi and Horst Samulowitz, IBM Research, Yorktown Heights, NY, USA ({ssohrab,samulowitz}@us.ibm.com); Patrick Ferber, University of Basel, Basel, Switzerland (patrick.ferber@unibas.ch).

Pseudocode: No
Evidence: The paper does not contain any sections or blocks explicitly labeled 'Pseudocode' or 'Algorithm'.

Open Source Code: No
Evidence: The paper states "The data set described in this section is available online.1", with footnote 1 pointing to https://github.com/IBM/IPC-image-data. This link is for the dataset, not for open-source code implementing the methodology described in the paper.

Open Datasets: Yes
Evidence: "Our collection of tasks includes all benchmarks of the classical tracks of all IPCs as well as some domains from the learning tracks. The data set described in this section is available online.1" (Footnote 1: https://github.com/IBM/IPC-image-data). A retrieval sketch appears after this list.

Dataset Splits: Yes
Evidence: "Each step of the algorithm uses a 5-fold cross-validation, separating the training data into 5 subsets consisting of 20% of the data each. We call the two variants random split and domain-preserving split." A sketch of both split strategies appears after this list.

Hardware Specification: Yes
Evidence: "The training is performed on NVIDIA(R) Tesla(R) K80 GPUs."

Software Dependencies: No
Evidence: The paper mentions "Our tool of choice for training is Keras (Chollet 2015) with Tensorflow as a back end." However, it does not provide specific version numbers for Keras, TensorFlow, or any other software dependencies. A minimal training sketch appears after this list.

Experiment Setup: Yes
Evidence: "Tables 1 and 2 summarize the parameters obtained for these 24 settings."

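For the Open Datasets entry, the only ground truth in the paper is the repository URL. The following is a minimal sketch of fetching that repository and enumerating its image files; the directory layout and file extensions are assumptions, not documented properties of the dataset.

```python
# Hypothetical sketch: clone the dataset repository cited in the paper and
# count the image files it contains. Only the URL comes from the paper;
# layout and extensions below are assumptions.
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/IBM/IPC-image-data"
LOCAL_DIR = Path("IPC-image-data")

if not LOCAL_DIR.exists():
    subprocess.run(["git", "clone", REPO_URL, str(LOCAL_DIR)], check=True)

image_paths = sorted(
    p for p in LOCAL_DIR.rglob("*") if p.suffix.lower() in {".png", ".jpg", ".bmp"}
)
print(f"Found {len(image_paths)} candidate image files")
```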
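For the Dataset Splits entry, the paper describes 5-fold cross-validation with two variants: a random split and a domain-preserving split (all tasks of a planning domain stay on the same side of the split). The sketch below illustrates both variants with scikit-learn; the data, feature shapes, and labels are placeholders rather than the authors' pipeline.

```python
# Sketch of the two 5-fold cross-validation variants named in the paper.
# All arrays here are hypothetical stand-ins for the real task features,
# planner labels, and domain assignments.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

rng = np.random.default_rng(0)
n_tasks = 200
X = rng.random((n_tasks, 64))                 # hypothetical task features
y = rng.integers(0, 2, size=n_tasks)          # hypothetical labels
domains = rng.integers(0, 20, size=n_tasks)   # hypothetical domain id per task

# Random split: tasks are shuffled into 5 folds of roughly 20% each.
random_split = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(random_split.split(X)):
    print(f"random fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")

# Domain-preserving split: no domain appears in both training and validation.
domain_split = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(domain_split.split(X, y, groups=domains)):
    assert set(domains[train_idx]).isdisjoint(domains[val_idx])
    print(f"domain-preserving fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")
```

GroupKFold keeps every domain entirely within a single fold, which is the defining property of the domain-preserving variant; with grouped data the folds are only approximately 20% of the tasks each.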
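For the Software Dependencies entry, the paper names only the tooling (Keras with a TensorFlow backend) and pins no versions. The sketch below shows a minimal training setup with that tooling; the architecture, input shape, number of outputs, and hyper-parameters are all placeholders, not the authors' settings, and behaviour may differ across Keras/TensorFlow releases precisely because no versions are given.

```python
# Minimal Keras-on-TensorFlow training sketch. Everything below except the
# choice of library is a placeholder assumption.
import numpy as np
from tensorflow import keras

n_outputs = 17               # hypothetical portfolio size
input_shape = (128, 128, 1)  # hypothetical image-shaped task encoding

model = keras.Sequential([
    keras.layers.Input(shape=input_shape),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(n_outputs, activation="sigmoid"),  # one output per planner
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hypothetical training data: image-like task encodings and per-output labels.
X_train = np.random.rand(32, *input_shape).astype("float32")
y_train = (np.random.rand(32, n_outputs) > 0.5).astype("float32")
model.fit(X_train, y_train, epochs=1, batch_size=8, verbose=0)
```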