On the Online Generation of Effective Macro-Operators

Authors: Lukáš Chrpa, Mauro Vallati, Thomas Leo McCluskey

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation on IPC benchmarks demonstrates performance improvements across a range of state-of-the-art planning engines and provides insights into which macros can be generated without training.
Researcher Affiliation | Academia | University of Huddersfield ({l.chrpa, m.vallati, t.l.mccluskey}@hud.ac.uk)
Pseudocode | Yes | "Algorithm 1: The OMA algorithm." (followed by a pseudocode block)
Open Source Code | No | The paper provides no links to source code and does not state that source code is released.
Open Datasets | No | The paper uses the standard IPC (International Planning Competition) benchmarks and refers to IPC-7 and IPC-8, but it gives no concrete access information (e.g., a specific link, DOI, or formal citation) for the benchmark sets themselves.
Dataset Splits | No | The IPC benchmarks come with predefined problem sets, but the paper does not specify any training/validation/test split (e.g., exact percentages or sample counts) for its own experiments; it only refers to the existing IPC-7 and IPC-8 track setups.
Hardware Specification | Yes | "All the experiments were run on 3.0 Ghz machine CPU with 4GB of RAM."
Software Dependencies | No | The paper names planners (Yahsp3, Mpc, Probe, Bfs-f, Cedalion, Freelunch, and ArvandHerd; Metric-FF, LPG-td, LAMA-11, Mp, and Probe) but specifies no version numbers or other software dependencies.
Experiment Setup | Yes | "The constants c1, c2, l and k (see the previous section) were set to 0.4, 1.0, min(8, 2 |ops|) and min(4, |ops|) respectively for all the benchmarks. They were set by considering results on a small set of problems/domains."
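Taking the quoted formulas at face value, the reported constant settings could be captured in a small helper. This is an illustrative sketch only: the function name `oma_constants` is hypothetical, and `num_ops` stands in for |ops|, the number of operators in the planning domain.

```python
def oma_constants(num_ops: int) -> dict:
    """Return the OMA hyperparameters as reported in the paper's setup.

    num_ops corresponds to |ops|, the number of operators in the domain.
    The fixed values (0.4, 1.0) and the caps (8, 4) are taken directly
    from the quoted experiment-setup text.
    """
    return {
        "c1": 0.4,                      # fixed for all benchmarks
        "c2": 1.0,                      # fixed for all benchmarks
        "l": min(8, 2 * num_ops),       # capped at 8
        "k": min(4, num_ops),           # capped at 4
    }
```

For a small domain with 3 operators this yields l = 6 and k = 3; for any domain with 4 or more operators the caps dominate and l = 8, k = 4.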