Exploiting Justifications for Lazy Grounding of Answer Set Programs
Authors: Bart Bogaerts, Antonius Weinzierl
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented the justification analysis in ALPHA and present the results of our experiments. The benchmarks were run on a cluster of Linux machines with Intel Xeon E5-2680 v3 CPUs. |
| Researcher Affiliation | Academia | KU Leuven, Department of Computer Science, Celestijnenlaan 200A, Leuven, Belgium; Aalto University, Department of Computer Science, FI-00076 AALTO, Finland |
| Pseudocode | Yes | Algorithm 1: ANALYZE: High level overview of the justification-conflict analysis. Algorithm 2: EXPLAINUNJUST: Find a set of litsets that covers all bodies of rules with head p. Algorithm 3: UNJUSTCOVER |
| Open Source Code | Yes | ALPHA is freely available at: https://github.com/alpha-asp/Alpha |
| Open Datasets | Yes | The instances used for benchmarking are available at https://dtai.cs.kuleuven.be/krr/experiments/alpha_justifications.zip. |
| Dataset Splits | No | The paper does not specify distinct training, validation, or test splits for its benchmarks, which are problem instances used for evaluation. |
| Hardware Specification | Yes | The benchmarks were run on a cluster of Linux machines with Intel Xeon E5-2680 v3 CPUs. |
| Software Dependencies | No | The paper mentions 'ALPHA' and 'CLINGO' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Each benchmark was given 300 seconds and 8 GB of memory on a single core of the cluster. Every run requested 10 answer sets; for problems admitting random instances, the reported run times are an average over 10 different random inputs, while for other problems they are an average over 5 runs on the same instance. |