Heuristic Subset Selection in Classical Planning
Authors: Levi H. S. Lelis, Santiago Franco, Marvin Abisrror, Mike Barley, Sandra Zilles, Robert C. Holte
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using the problems from the 2011 International Planning Competition (IPC), we empirically evaluate GHS in optimal classical planning problems while minimizing J, T, and maximizing the sum of heuristic values in the state space. Our experiments show that the subsets chosen by our algorithm can be far superior, in terms of coverage, to defining hmax over the entire collection and to state-of-the-art methods. |
| Researcher Affiliation | Academia | Levi H. S. Lelis, Santiago Franco, Marvin Abisrror, Dept. de Informática, Universidade Federal de Viçosa, Brazil; Mike Barley, Comp. Science Dept., Auckland University, Auckland, New Zealand; Sandra Zilles, Dept. of Comp. Science, University of Regina, Regina, Canada; Robert C. Holte, Dept. of Comp. Science, University of Alberta, Edmonton, Canada |
| Pseudocode | Yes | Algorithm 1 Greedy Heuristic Selection |
| Open Source Code | No | The paper states: "we have implemented GHS in Fast Downward [Helmert, 2006]" but does not provide a link or explicit statement about the availability of their specific implementation's source code. |
| Open Datasets | Yes | Using the problems from the 2011 International Planning Competition (IPC) |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits (e.g., percentages or counts) for the IPC instances used in the experiments. |
| Hardware Specification | No | The paper states: "All experiments are run on 2.67 GHz machines with 4 GB" but does not provide specific CPU or GPU models, or other detailed hardware specifications. |
| Software Dependencies | No | The paper mentions software like "Fast Downward [Helmert, 2006]" and uses "the preprocessor described by Alcázar and Torralba [2015]" but does not provide specific version numbers for these or other software components used. |
| Experiment Setup | Yes | The process of generating the heuristic collection is limited to 1 GB of memory and 600 seconds. The sampling procedure is bounded by a 300-second time limit for both CS and SS. We use the number of SS probes p = 500 in all our experiments. For GHS we allow 1,200 seconds in total for both selecting H′ and for running A* with hmax(H′), and for Max we allow 1,200 seconds for running A* with hmax(H). We use one third of 600 seconds to generate GAPDBs with each of the following maximum numbers of entries: {2×10^4, 2×10^5, 2×10^6}. |
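The paper's Algorithm 1 (Greedy Heuristic Selection) is not reproduced in this table, but the general idea, greedily adding the heuristic that most improves a subset score until no candidate helps, can be illustrated with a generic sketch. The scoring function, the toy heuristics, and the `budget` parameter below are illustrative assumptions, not the authors' implementation, which also accounts for evaluation time and memory limits.

```python
# Hypothetical sketch of a greedy heuristic-subset-selection loop, in the
# spirit of GHS but NOT the paper's exact algorithm: it ignores the time
# and memory terms the paper optimizes and uses a toy scoring function.

def greedy_select(heuristics, score, budget):
    """Greedily add the candidate that most improves score(subset);
    stop when no candidate improves it or the size budget is reached."""
    selected = []
    best = score(selected)
    while len(selected) < budget:
        gains = [(score(selected + [h]), h)
                 for h in heuristics if h not in selected]
        if not gains:
            break
        new_best, h = max(gains, key=lambda t: t[0])
        if new_best <= best:  # no candidate improves the subset
            break
        selected.append(h)
        best = new_best
    return selected

# Toy example: each "heuristic" maps a few sampled states to values; the
# score is the sum over states of the max value in the subset, mirroring
# the idea of maximizing the sum of hmax values over sampled states.
states = range(4)
h1 = {0: 3, 1: 0, 2: 1, 3: 0}
h2 = {0: 0, 1: 2, 2: 1, 3: 0}
h3 = {0: 3, 1: 0, 2: 0, 3: 0}  # dominated by h1, should not be picked

def score(subset):
    return sum(max((h[s] for h in subset), default=0) for s in states)

chosen = greedy_select([h1, h2, h3], score, budget=2)
print(len(chosen), score(chosen))  # the dominated h3 is never selected
```

With this toy score, the loop first picks h1 (score 4), then h2 (score 6), and stops: h3 adds nothing beyond h1, which is exactly the redundancy a subset-selection step is meant to prune relative to taking hmax over the whole collection.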