Explainable Planner Selection for Classical Planning
Authors: Patrick Ferber, Jendrik Seipp
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For each task in our benchmark set, we compute the values of our features and measure the runtimes of a set of planners for the task. Then we use supervised learning to train models for planner selection. and Our experiments are structured as follows. |
| Researcher Affiliation | Academia | 1 University of Basel, Basel, Switzerland; 2 Saarland University, Saarland Informatics Campus, Saarbrücken, Germany; 3 Linköping University, Linköping, Sweden |
| Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | Yes | All our data sets, code, and experiment results are published online (Ferber and Seipp 2022). and in references: Ferber, P.; and Seipp, J. 2022. Code, data sets, and experiment data for the AAAI 2022 Paper Explainable Planner Selection for Classical Planning. https://doi.org/10.5281/zenodo.5749959. |
| Open Datasets | Yes | To be comparable to previous work, we use the data set from Ferber et al. (2019), which contains both a list of benchmark tasks and their planner runtimes. and All our data sets, code, and experiment results are published online (Ferber and Seipp 2022). |
| Dataset Splits | Yes | For training and evaluating models we split the tasks into groups of training and test tasks. and All experiments except for the comparison to Delfi1 use 10-fold cross-validation, that is, we split the data into ten similarly-sized folds. |
| Hardware Specification | Yes | We run all experiments on single Intel Xeon Silver 4114 cores and limit memory usage to 3 GiB. |
| Software Dependencies | No | No specific version numbers are provided for software dependencies like Python, TensorFlow, or scikit-learn. |
| Experiment Setup | Yes | First, we train plain linear regression models (Galton 1886) and linear regression models with L1 regularization (Tibshirani 1996) using regularization weights of 0.1, 1.0, 2.0 and 5.0. Second, we train random forests (Breiman 2001), i.e., ensembles of decision trees (Breiman et al. 1984). Linear regression and random forests internally train an independent model for each planner. Finally, we train fully-connected multi-layer perceptrons (MLP) with 3 and 5 layers. and We use the Adam optimizer (Kingma and Ba 2015) with a learning rate of 0.001 to optimize the weights. For the networks that predict the time or logtime we use the ReLU activation function and the mean squared error. For the networks that predict the binary label we use the Sigmoid activation function and the cross entropy loss. |
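
The setup quoted in the last rows can be illustrated with a short, hedged sketch: 10-fold cross-validation over tasks, with independent per-planner models for plain and L1-regularized linear regression (weights 0.1, 1.0, 2.0 and 5.0), a random forest, and an MLP trained with Adam (learning rate 0.001), ReLU activations and squared-error loss. The feature matrix `X`, the runtime matrix `y`, the hidden-layer widths, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the evaluation pipeline summarized above (not the authors'
# code): 10-fold cross-validation over planning tasks and the three model
# families described in the paper. X, y, hidden-layer sizes, and scikit-learn
# itself are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 20))           # hypothetical task features
y = rng.random((500, 17)) * 1800.0  # hypothetical planner runtimes in seconds

# MultiOutputRegressor fits an independent model per planner, matching the
# paper's description of the linear-regression and random-forest models.
models = {
    "linear": MultiOutputRegressor(LinearRegression()),
    **{f"lasso_{w}": MultiOutputRegressor(Lasso(alpha=w))
       for w in (0.1, 1.0, 2.0, 5.0)},          # L1 regularization weights
    "random_forest": MultiOutputRegressor(RandomForestRegressor()),
    # 3-layer MLP: ReLU activations, Adam with learning rate 0.001,
    # squared-error objective; layer widths are assumptions.
    "mlp_3_layers": MLPRegressor(hidden_layer_sizes=(64, 64, 64),
                                 activation="relu", solver="adam",
                                 learning_rate_init=0.001, max_iter=500),
}

# 10-fold cross-validation: train on nine folds, evaluate on the held-out fold.
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    fold_errors = []
    for train_idx, test_idx in kfold.split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        fold_errors.append(np.mean((pred - y[test_idx]) ** 2))
    print(f"{name}: mean squared error over folds = {np.mean(fold_errors):.2f}")
```

For the binary-label variant mentioned in the paper (Sigmoid output with cross-entropy loss), the analogous sketch would replace `MLPRegressor` with `MLPClassifier` trained on a solved-within-timeout label, since scikit-learn's classifier uses a logistic output and log-loss by default.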