Ontology Materialization by Abstraction Refinement in Horn SHOIF
Authors: Birte Glimm, Yevgeny Kazakov, Trung-Kien Tran
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An empirical evaluation demonstrates that, despite the new features, the abstractions are still significantly smaller than the original ontologies and the materialization can be computed efficiently. We implemented a prototype system Orar for full materialization of Horn SHOIF ontologies, evaluated Orar on popular ontologies, and compared it with other reasoners. Table 3 presents detailed information about the test ontologies and the experimental results. |
| Researcher Affiliation | Academia | Birte Glimm and Yevgeny Kazakov and Trung-Kien Tran, Institute of Artificial Intelligence, University of Ulm, Germany, <first name>.<last name>@uni-ulm.de |
| Pseudocode | No | The general algorithm for ontology reasoning using the abstraction refinement method can be summarized as follows: 1. Build a suitable abstraction of the original ontology; 2. Compute the entailments from the abstraction using a reasoner and transfer them to the original ontology using homomorphisms (Lemma 1); 3. Compute the deductive closure of the original ontology using some (light-weight) rules; 4. Repeat from Step 1 until no new entailments can be added to the original ontology. (An illustrative sketch of this loop follows the table.) |
| Open Source Code | Yes | The test ontologies and our system are available online.1 [footnote 1: https://www.uni-ulm.de/en/in/ki/software/orar] |
| Open Datasets | Yes | For the popular benchmarks LUBM (Guo, Pan, and Heflin 2005) and UOBM (Ma et al. 2006), we use L_n and U_n to denote the datasets for n universities, respectively. |
| Dataset Splits | No | The paper evaluates the system on several real-world and benchmark ontologies, but there is no mention of training, validation, or test dataset splits. |
| Hardware Specification | Yes | All results were obtained using a compute server with two Intel Xeon E5-2660V3 processors and 512 GB RAM and a timeout of five hours. |
| Software Dependencies | Yes | We limit this comparison to the reasoners Konclude 0.6.2 (Steigmiller, Liebig, and Glimm 2014) and PAGOdA 2.0 (Zhou et al. 2015), which we found to perform best for our test ontologies. |
| Experiment Setup | Yes | All results were obtained using a compute server with two Intel Xeon E5-2660V3 processors and 512 GB RAM and a timeout of five hours. |
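The four-step loop quoted in the Pseudocode row lends itself to a compact sketch. The following Python skeleton is an illustrative outline only, not the Orar implementation; the helpers `build_abstraction`, `reason`, `transfer_entailments`, and `apply_rules` are hypothetical placeholders standing in for the steps described in the paper.

```python
def materialize(ontology, build_abstraction, reason, transfer_entailments, apply_rules):
    """Hypothetical sketch of the abstraction-refinement loop.

    `ontology` is modeled as a set of assertions; the loop repeats until
    no new entailments can be added (Step 4 in the paper's summary).
    """
    while True:
        before = len(ontology)
        abstraction = build_abstraction(ontology)                 # Step 1: build a suitable abstraction
        entailments = reason(abstraction)                         # Step 2: reason over the abstraction
        ontology |= transfer_entailments(entailments, ontology)   # ...and transfer results back (homomorphisms)
        ontology |= apply_rules(ontology)                         # Step 3: lightweight deductive closure
        if len(ontology) == before:                               # Step 4: stop at the fixpoint
            return ontology


# Toy usage: with no-op helpers the loop terminates immediately.
closed = materialize(
    {("a", "A")},
    build_abstraction=lambda o: o,
    reason=lambda a: set(),
    transfer_entailments=lambda e, o: e,
    apply_rules=lambda o: set(),
)
```

The sketch only captures the control flow (abstract, reason, transfer, close, repeat); the abstraction construction and the rule set for Horn SHOIF are what the paper itself contributes.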