Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Lazy Model Expansion: Interleaving Grounding with Search

Authors: Broes De Cat, Marc Denecker, Maurice Bruynooghe, Peter Stuckey

JAIR 2015 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6. Experiments. The IDP system has a state-of-the-art model expansion engine, as can be observed from previous Answer-Set Programming competitions (Denecker et al., 2009; Calimeri et al., 2014; Alviano et al., 2013). The lazy model expansion algorithms presented in this paper were implemented in the IDP system, by extending the existing algorithms (De Cat, Bogaerts, Devriendt, & Denecker, 2013). The number of solved instances and average time are shown in Table 1; the average grounding size for the IDP setup is shown in Table 2.
Researcher Affiliation | Collaboration | Broes De Cat (EMAIL), OM Partners, Belgium; Marc Denecker (EMAIL), Dept. Computer Science, KU Leuven, Belgium; Peter Stuckey (EMAIL), National ICT Australia and Dept. of Computing and Information Systems, The University of Melbourne, Australia; Maurice Bruynooghe (EMAIL), Dept. Computer Science, KU Leuven, Belgium
Pseudocode | Yes | Algorithm 1: The one_step_ground algorithm. 1 Function one_step_ground(formula or rule ϕ) ... Algorithm 2: The lazy_mx lazy model expansion algorithm. 1 Function lazy_mx(atomic sentence PT, definition, structure Iin) ...
Open Source Code | No | The paper states only that the algorithms were implemented in the IDP system, and mentions IDP's public distribution in the context of a specific meta-level specification, but it gives no unambiguous release statement for the source code of the described methodology and no direct link to a repository.
Open Datasets | Yes | For each of these, we used all instances from the 2011 and 2013 competitions, except for the 2013 Reachability instances... For Stable Marriage, Graph Colouring and Reachability, we based our encodings on the available ASP-Core-2 encodings. For Packing and Disjunctive Scheduling, we constructed a natural FO(·)IDP encoding and made a faithful translation to ASP. For the more complex benchmarks of Labyrinth and Sokoban, we used the original FO(·)IDP and Gringo-Clasp's ASP specifications submitted to the 2011 competition.
Dataset Splits | No | The paper uses ASP competition problem instances, which are designed for evaluation rather than being datasets requiring training/validation/test splits in the traditional machine-learning sense; accordingly, no such splits are provided or required.
Hardware Specification | Yes | The experiments for Sections 6.1 and 6.3 were run on a 64-bit Ubuntu 13.10 system with a quad-core 2.53 GHz processor and 8 GB of RAM. Experiments for Section 6.2 were run on a 64-bit Ubuntu 12.10 system with a 24-core 2.40 GHz processor and 128 GB of RAM.
Software Dependencies | Yes | We used IDP version 3.2.1-lazy, Gringo 3.0.5 and Clasp 2.1.2-st.
Experiment Setup | Yes | For an existential quantification, 10 instantiations are grounded at a time; for a disjunction, 3 disjuncts are grounded at a time. This turned out to give the best balance between introducing too many Tseitin atoms and grounding too much. The initial truth value is t with probability 0.2 and f otherwise. The initial threshold for randomized restarts is 100 extensions of the ground theory. It is doubled after each restart. A formula is considered small if its estimated grounding size is below 10^4 atoms.
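The setup row above quotes several concrete control parameters: batch sizes for lazy grounding, a random initial truth assignment, and a doubling (geometric) restart threshold. The sketch below is a minimal illustration of that parameter set and restart schedule only; the names (`LazyMXConfig`, `RestartPolicy`, `record_extension`) are hypothetical and are not part of IDP's actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class LazyMXConfig:
    """Parameters as quoted from the paper's experiment setup."""
    exists_batch: int = 10          # instantiations grounded at a time for an existential quantification
    disjunct_batch: int = 3         # disjuncts grounded at a time for a disjunction
    init_true_prob: float = 0.2     # initial truth value is t with probability 0.2, f otherwise
    restart_threshold: int = 100    # initial threshold: 100 extensions of the ground theory
    small_grounding_limit: int = 10**4  # "small" formula: estimated grounding size below 10^4 atoms

class RestartPolicy:
    """Doubling restart schedule: the threshold doubles after each restart."""
    def __init__(self, cfg: LazyMXConfig):
        self.threshold = cfg.restart_threshold
        self.extensions = 0

    def record_extension(self) -> bool:
        """Count one extension of the ground theory; return True when a
        randomized restart should fire (and double the threshold)."""
        self.extensions += 1
        if self.extensions >= self.threshold:
            self.extensions = 0
            self.threshold *= 2
            return True
        return False

cfg = LazyMXConfig()
policy = RestartPolicy(cfg)
# With the quoted schedule, restarts fire after 100 extensions, then 200 more,
# then 400 more, i.e. at cumulative extension counts 100, 300, 700, ...
restarts_at = [i for i in range(1, 800) if policy.record_extension()]
print(restarts_at)  # [100, 300, 700]
```

This kind of geometric schedule keeps early restarts cheap while letting later runs explore longer before being interrupted.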