Exploiting Block Deordering for Improving Planners Efficiency

Authors: Lukáš Chrpa, Fazlul Hasan Siddiqui

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method is evaluated by using the IPC benchmarks with state-of-the-art planning engines, and shows considerable improvement in many cases. We experimentally evaluated BLOMA in order to demonstrate how it improves against the original and MUM-enhanced domain and problem models.
Researcher Affiliation | Academia | Lukáš Chrpa, PARK Research Group, School of Computing & Engineering, University of Huddersfield; Fazlul Hasan Siddiqui, NICTA Optimisation Research Group, Research School of Computer Science, The Australian National University, Australia.
Pseudocode | Yes | Algorithm 1: Computing extended blocks. Algorithm 2: The high-level design of BLOMA.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor any links to code repositories.
Open Datasets | Yes | We used all the domains from the learning track of IPC-7. The training problems were rather simple but not trivial, so plan lengths were mostly within 40-80 steps.
Dataset Splits | No | The paper mentions 'training problems' and 're-generated training plans' for filtering macros, but it does not specify explicit validation splits (e.g., percentages or counts for a distinct validation set) or a cross-validation setup.
Hardware Specification | Yes | All the experiments were run on an Intel Xeon 2.53 GHz with 2 GB of RAM, running CentOS 6.5.
Software Dependencies | No | The paper names the state-of-the-art planners used in the experiments: LAMA [Richter and Westphal, 2010], MpC [Rintanen, 2014], Probe [Lipovetzky et al., 2014], Mercury [Katz and Hoffmann, 2014], YAHSP3 [Vidal, 2014], and BFS(f) [Lipovetzky et al., 2014]. However, specific version numbers for these software dependencies are not provided in the text.
Experiment Setup | Yes | The parameters pb and pp were both set to 0.5.