Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Computing Concept Referring Expressions for Queries on Horn ALC Ontologies
Authors: Moritz Illich, Birte Glimm
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The evaluation of our prototypical implementation shows that computing CREs for the most general concept (⊤) can be done in less than one minute for ontologies with thousands of individuals and concepts. In Section 4, we show the results of our empirical evaluation |
| Researcher Affiliation | Academia | Moritz Illich, Birte Glimm, Institute of Artificial Intelligence, Ulm University, Germany, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1 Answering generalized instance queries, Algorithm 2 Computing CREs for a base individual |
| Open Source Code | Yes | A prototypical Java implementation of our algorithm is available online: https://github.com/M-Illich/Computing-CREs |
| Open Datasets | Yes | the implementation was tested on different ontologies listed in Table 1 with codinteraction-A and the separate ore_ont_2608/4516/3313 being part of the ORE 2015 Reasoner Competition Corpus [Matentzoglu and Parsia, 2015], while HAO (v2021-03-05), VO (v1.1.171) and DTO (v1.1.1) were taken from BioPortal: https://bioportal.bioontology.org/ontologies |
| Dataset Splits | No | The paper evaluates an algorithm for computing concept referring expressions on ontologies. It does not describe experiments that use traditional training, validation, and test splits common in machine learning contexts. The ontologies themselves serve as the data for querying. |
| Hardware Specification | Yes | HermiT as reasoner, based on an AMD Ryzen 7 3700X 3.59 GHz processor with 16 GB RAM on Windows 10 (64-Bit). |
| Software Dependencies | Yes | HermiT (v1.3.8) and JFact (v5.0.3). |
| Experiment Setup | No | The paper describes optimizations for the algorithm and lists the ontologies and reasoners used for evaluation. However, it does not specify concrete experimental setup details such as hyperparameters, learning rates, batch sizes, or other training configurations, as these are not applicable to the type of algorithm and evaluation presented (querying ontologies, not training a machine learning model). |