On the Effective Configuration of Planning Domain Models
Authors: Mauro Vallati, Frank Hutter, Lukas Chrpa, Thomas Leo McCluskey
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we investigate how the performance of planners is affected by domain model configuration. We introduce a fully automated method for this configuration task, and show in an extensive experimental analysis with six planners and seven domains that this process (which can, in principle, be combined with other forms of reformulation and configuration) can have a remarkable impact on performance across planners. Furthermore, studying the obtained domain model configurations can provide useful information to effectively engineer planning domain models. (Sec. 3, Experimental Analysis) Our experimental analysis aims to evaluate the impact that domain model configuration, as described in the previous section, has on state-of-the-art planning systems. |
| Researcher Affiliation | Academia | Mauro Vallati University of Huddersfield m.vallati@hud.ac.uk Frank Hutter University of Freiburg fh@informatik.uni-freiburg.de Lukáš Chrpa and Thomas L. McCluskey University of Huddersfield {l.chrpa,t.l.mccluskey}@hud.ac.uk |
| Pseudocode | No | The paper describes algorithms and procedures in prose, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper refers to the public availability of SMAC (http://www.aclib.net/SMAC), a tool they used, but does not provide any statement or link for the open-source code of their own methodology or implementation described in the paper. |
| Open Datasets | No | The paper states 'For each domain we study, we created approximately 550 random instances with the domain's random instance generator.' It mentions domains used in IPCs but does not provide concrete access information (link, DOI, specific repository, or formal citation with authors/year) for these generated instances or any other publicly available dataset used for training. |
| Dataset Splits | No | The paper states, 'We split these instances into a training set (roughly 500 instances) and a test set (roughly 50 instances) in order to obtain an unbiased estimate of generalisation performance to previously unseen instances from the same distribution.' It does not explicitly provide details for a distinct validation set. |
| Hardware Specification | Yes | We performed experiments on AMD Opteron™ machines with 2.4 GHz, 8 GB of RAM and Linux operating system. |
| Software Dependencies | Yes | Configuration of domain models was done using SMAC version 2.08 (publicly available at http://www.aclib.net/SMAC). |
| Experiment Setup | Yes | The performance metric we optimised trades off coverage (# instances for which we find a plan) and runtime for successful instances; specifically, we minimise Penalized Average Runtime (PAR), counting runs that crash or do not find a plan as ten times the cutoff time (PAR10). ... Each of our configuration runs was limited to a single core, and was given an overall runtime and memory limits of 5 days and 8GB, respectively. As in the Agile track of the IPC 2014, the cutoff time for each instance, both for training and testing purposes, was 300 seconds. |
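The Penalized Average Runtime metric quoted above is straightforward to compute. The sketch below illustrates PAR10 with the paper's 300-second cutoff; the function name and data layout are illustrative choices, not from the paper.

```python
# Penalized Average Runtime with penalty factor 10 (PAR10): runs that
# crash or fail to find a plan within the cutoff are counted as
# 10x the cutoff time, then all runs are averaged.

CUTOFF = 300.0  # seconds, the per-instance cutoff used in the paper


def par10(runtimes, cutoff=CUTOFF):
    """Compute PAR10 over a list of per-instance runtimes.

    A runtime of None (crash / no plan found) or any runtime exceeding
    the cutoff is penalised as 10 * cutoff.
    """
    penalised = [
        t if (t is not None and t <= cutoff) else 10 * cutoff
        for t in runtimes
    ]
    return sum(penalised) / len(penalised)


# Example: two solved instances and one timeout.
# (12.5 + 250.0 + 10 * 300) / 3 = 1087.5
print(par10([12.5, 250.0, None]))
```

Note how a single unsolved instance dominates the average: this is what pushes the configurator toward domain models that improve coverage, not just speed on already-solved instances.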