Intelligent Belief State Sampling for Conformant Planning
Authors: Alban Grastien, Enrico Scala
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that this approach is competitive on a class of problems that are hard for traditional planners, and also returns generally shorter plans. We test the planner on a large set of benchmarks. We notice that our planner is able to handle problems that are traditionally hard for existing planners, in particular problems with large width. These results are very encouraging particularly given the simplicity of the approach, and the simplicity of the resulting developed software architecture. |
| Researcher Affiliation | Collaboration | Alban Grastien1,2, Enrico Scala2,3 1 Data61, Canberra, Australia 2 The Australian National University, Canberra, Australia 3 Fondazione Bruno Kessler, Trento, Italy |
| Pseudocode | Yes | Algorithm 1 Conformant Planning Algorithm. 1: input: planning problem P 2: B := {initial sample of the belief state} 3: π := ε {the empty plan is a plan for the empty belief state} 4: loop 5: if π is a solution for P then 6: return π {plan found} 7: end if 8: let q be a counter-example 9: B := B ∪ {q} 10: compute new plan π for P_B 11: if no such π exists then 12: return no plan exists 13: end if 14: end loop |
| Open Source Code | Yes | Our system is called CPCES (Conformant Planner via Counter Example and Sampling). Its architecture has been conceived to be modular and independent from the particular solvers. The system is available at https://bitbucket.org/enricode/cpces |
| Open Datasets | Yes | We took a set of domains from Albore et al. [2011]. In particular we focus our attention on two categories: domains having width strictly larger than 1, which are notoriously difficult for all state-of-the-art planning systems, and domains with a width less than or equal to 1. The first set of domains consists of BLOCKSWORLD, RAO'S KEYS, ONE-DISPOSE and LOOK-GRAB. As we will see, none of the planners we are aware of has been able to efficiently solve those instances, or to prove their unsolvability. In the second set of domains we consider: DISPOSE, BOMB, COINS and UTS. |
| Dataset Splits | No | The paper mentions using specific benchmark domains and instances for evaluation but does not specify any training/validation/test splits, typical for planning problem evaluation rather than machine learning models. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions the use of FAST FORWARD (FF) and Z3, as well as PDDL, SMT-LIB2, and Java. However, it does not provide specific version numbers for any of these software components, which is necessary for reproducible software dependencies. |
| Experiment Setup | Yes | Table 1 reports the data collected for the two systems across many instances of the aforementioned domains, where we compared coverage, plan-quality and run-time spent to find a solution. Time is represented in seconds, timeout has been set to 1800 secs. |
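The counter-example-guided loop in Algorithm 1 can be sketched in Python. The toy problem below (an integer position with an uncertain start, a single saturating "inc" action, and a fixed goal) is an illustrative assumption, not a benchmark from the paper; the paper's actual system delegates the two subroutines to a classical planner (FF) and an SMT solver (Z3), whereas here both are stubbed with brute-force search:

```python
from collections import deque

# Hypothetical toy conformant problem: the state is an integer position,
# the true initial position is unknown within INIT, the only action "inc"
# moves right and saturates at GOAL. A conformant plan must reach GOAL
# from *every* possible initial state.
INIT = {0, 1, 2}
GOAL = 3
ACTIONS = {"inc": lambda s: min(s + 1, GOAL)}

def apply_plan(state, plan):
    """Execute a sequence of action names from a concrete state."""
    for name in plan:
        state = ACTIONS[name](state)
    return state

def plan_for_sample(sample):
    """Stand-in for the classical-planner call: BFS over belief states,
    returning a shortest plan that reaches GOAL from every state in the
    sampled belief, or None if the sample is unsolvable."""
    start = tuple(sorted(sample))
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        belief, plan = frontier.popleft()
        if all(s == GOAL for s in belief):
            return plan
        for name, act in ACTIONS.items():
            nxt = tuple(sorted({act(s) for s in belief}))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

def counter_example(plan):
    """Stand-in for the SMT check: an initial state on which the
    candidate plan fails, or None if the plan is conformant."""
    for s in INIT:
        if apply_plan(s, plan) != GOAL:
            return s
    return None

def cpces_style_loop():
    sample = {next(iter(INIT))}      # B := initial sample of the belief state
    while True:
        plan = plan_for_sample(sample)   # compute new plan for P_B
        if plan is None:
            return None                  # no conformant plan exists
        q = counter_example(plan)
        if q is None:
            return plan                  # plan found
        sample.add(q)                    # B := B ∪ {q}
```

Running `cpces_style_loop()` on this toy instance converges after at most one refinement: a plan optimized for a single sampled start is falsified by a farther-back initial state, which is then added to the sample, and the replanned solution of three "inc" steps works from all of INIT thanks to saturation.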