Query Answering for Existential Rules via Efficient Datalog Rewriting
Authors: Zhe Wang, Peng Xiao, Kewen Wang, Zhiqiang Zhuang, Hai Wan
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented a prototype system Drewer, and experiments show that it is able to handle a wide range of benchmarks in the literature. Moreover, Drewer shows superior or comparable performance over state-of-the-art systems on both the compactness of rewriting and the efficiency of query answering. |
| Researcher Affiliation | Academia | 1Griffith University, Australia 2Tianjin University, China 3Sun Yat-sen University, China |
| Pseudocode | No | The paper describes an algorithm but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have implemented a prototype system, Drewer (Datalog REWriting for Existential Rules), with our piece unification module adapted from the first-order rewriting system Graal [König et al., 2015a], and we deployed VLog as our datalog engine. The system and experiment benchmarks can be found at https://www.ict.griffith.edu.au/aist/Drewer. |
| Open Datasets | Yes | We evaluated ontologies including the DL-Lite versions of LUBM, OpenGALEN2, OBOprotein, and RS. RS is from [Bienvenu et al., 2017], with a simple ontology but specially crafted long queries (with up to 15 atoms), which is a known challenge to existing rewriting-based systems. Reactome and Uniprot are in OWL 2, and we used their existential rule fragments, which are more expressive than DL-Lite. The ontologies were converted into existential rules using a transformation tool provided by Graal. DEEP200/300, STB-128, and ONT-256 are from ChaseBench [Benedikt et al., 2017], a benchmark for chase-based reasoning systems. |
| Dataset Splits | No | The paper describes the datasets used and the number of queries evaluated, but does not provide training/validation/test splits (percentages or counts) or any data-splitting methodology. |
| Hardware Specification | Yes | All experiments were performed on a laptop with a processor at 2.2 GHz and 8GB of RAM. |
| Software Dependencies | Yes | We have implemented a prototype system, Drewer (Datalog REWriting for Existential Rules), with our piece unification module adapted from the first-order rewriting system Graal [König et al., 2015a], and we deployed VLog as our datalog engine. |
| Experiment Setup | No | The paper describes the experimental comparison and the benchmarks used, but does not provide specific experimental setup details such as hyperparameter values, timeout settings, or configuration parameters for its proposed system or the systems it is compared against. |