LP Heuristics over Conjunctions: Compilation, Convergence, Nogood Learning

Authors: Marcel Steinmetz, Jörg Hoffmann

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on IPC benchmarks show significant performance improvements in several domains.
Researcher Affiliation | Academia | Marcel Steinmetz and Jörg Hoffmann, Saarland University, Saarland Informatics Campus, Saarbrücken, Germany, {steinmetz,hoffmann}@cs.uni-saarland.de
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper.
Open Datasets | Yes | We use the UIPC'16 benchmarks, as well as unsolvable resource-constrained (RCP) benchmarks [Nakhost et al., 2012; Steinmetz and Hoffmann, 2017].
Dataset Splits | No | The paper uses standard benchmarks but does not specify explicit training/validation/test dataset splits, percentages, or sample counts.
Hardware Specification | Yes | All experiments were run on machines equipped with Intel Xeon E5-2660 CPUs, with runtime (memory) limits of 30 minutes (4 GB).
Software Dependencies | No | The paper mentions 'Fast Downward (FD) [Helmert, 2006]' as the implementation environment but does not specify version numbers for FD or any other software dependencies.
Experiment Setup | Yes | Similar to earlier works on the Π^C-compilation [Keyder et al., 2014], we cope with the worst-case explosion by imposing a size limit M on the ratio |A^C|/|A|. Once Π^C reaches the limit M, we disable the generation of new conjunctions. We experimented with M ∈ {2, 4, 8, ..., 1024, ∞}, where for M = ∞ the size of Π^C is not limited. To counteract this brittleness, all our configurations in what follows combine the five variable orders, maintaining for each a separate conjunction set.
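For illustration, the following is a minimal Python sketch of the size-limit mechanism quoted above: conjunction generation is gated on the ratio |A^C|/|A| staying below M. All names are hypothetical and do not reflect the authors' Fast Downward implementation.

```python
# Hypothetical sketch of the |A^C|/|A| size limit described in the setup quote.
# Names are illustrative; the actual mechanism lives inside Fast Downward.

def try_add_conjunction(original_actions, compiled_actions, candidate_actions, M=None):
    """Add the actions a new conjunction would introduce to A^C, but only
    while the ratio |A^C|/|A| stays within the limit M.

    M=None models M = infinity, i.e. the size of Pi^C is not limited.
    Returns True if the conjunction was added, False if generation is disabled.
    """
    if M is not None:
        projected_size = len(compiled_actions) + len(candidate_actions)
        if projected_size > M * len(original_actions):
            return False  # limit reached: stop generating new conjunctions
    compiled_actions.extend(candidate_actions)
    return True


# Tiny usage example with M = 2: the compilation may at most double |A|.
A = ["a1", "a2", "a3", "a4"]
A_C = list(A)
added = try_add_conjunction(A, A_C, ["a1^c", "a2^c"], M=2)          # |A^C| = 6 <= 8, added
blocked = try_add_conjunction(A, A_C, ["a3^c", "a4^c", "a5^c"], M=2)  # 9 > 8, refused
```

A configuration as in the quoted setup would run this check once per variable order, each order maintaining its own conjunction set.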