Learning Query Inseparable ℰℒℋ Ontologies

Authors: Ana Ozaki, Cosimo Persia, Andrea Mazzullo

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We investigate the complexity of learning query inseparable ELH ontologies in a variant of Angluin's exact learning model. Given a fixed data instance A* and a query language Q, we are interested in computing an ontology H that entails the same queries as a target ontology T on A*, that is, H and T are inseparable w.r.t. A* and Q. The learner is allowed to pose two kinds of questions. The first is 'Does (T, A) ⊨ q?', with A an arbitrary data instance and q a query in Q. An oracle replies to this question with 'yes' or 'no'. In the second, the learner asks 'Are H and T inseparable w.r.t. A* and Q?'. If so, the learning process finishes; otherwise, the learner receives (A*, q) with q ∈ Q, (T, A*) ⊨ q and (H, A*) ⊭ q (or vice versa). Then, we analyse conditions under which query inseparability is preserved if A* changes. Finally, we consider the PAC learning model and a setting where the algorithms learn from a batch of classified data, limiting interactions with the oracles. (An illustrative sketch of this two-oracle protocol is given after the table.)
Researcher Affiliation | Academia | Ana Ozaki, Cosimo Persia, Andrea Mazzullo (KRDB Research Centre, Free University of Bozen-Bolzano)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. It describes algorithms conceptually but does not present their steps in a structured format.
Open Source Code | No | The paper does not provide a link or an explicit statement about releasing source code for the methodology it describes. It mentions that 'Omitted proofs are available at https://arxiv.org/abs/1911.07229', which points to proofs, not code.
Open Datasets | No | The paper does not mention using any specific publicly available datasets for empirical training or evaluation. The concepts of 'ABox' and 'examples' are used in a theoretical sense within the learning model.
Dataset Splits | No | The paper does not specify any dataset splits (e.g., training, validation, test percentages or counts), as it is a theoretical work and does not involve empirical data splitting.
Hardware Specification | No | The paper is theoretical and does not involve empirical experiments, thus no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not involve empirical experiments requiring specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on complexity analysis of learning models, thus it does not provide any experimental setup details such as hyperparameters or training configurations.
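
The abstract quoted above describes an interaction protocol built on two kinds of oracle questions: membership queries ('Does (T, A) ⊨ q?') and inseparability queries ('Are H and T inseparable w.r.t. A* and Q?'). The sketch below is a hedged illustration of that protocol only, not the paper's learning algorithm: Ontology, Query, Oracle, entails, learn, and the string-based 'entailment' check are all hypothetical stand-ins introduced here for illustration; a real implementation would use an ELH reasoner and a proper query language.

```python
# Hypothetical sketch of the two-oracle exact-learning protocol from the abstract.
# All names, types, and the string-based "entailment" check are illustrative only.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Query:
    """Toy stand-in for a query q in the query language Q."""
    text: str


@dataclass
class Ontology:
    """Toy stand-in for an ELH ontology, reduced to a set of assertion strings."""
    axioms: set = field(default_factory=set)


def entails(ontology: Ontology, abox: set, query: Query) -> bool:
    """Placeholder for '(O, A) entails q'; a real implementation would call an ELH reasoner."""
    return query.text in ontology.axioms or query.text in abox


class Oracle:
    """Answers the two kinds of questions the learner is allowed to pose."""

    def __init__(self, target: Ontology, fixed_abox: set, query_language: list):
        self.target = target                  # hidden target ontology T
        self.fixed_abox = fixed_abox          # fixed data instance A*
        self.query_language = query_language  # queries Q considered over A*

    def membership(self, abox: set, query: Query) -> bool:
        """'Does (T, A) entail q?' for an arbitrary data instance A."""
        return entails(self.target, abox, query)

    def inseparable(self, hypothesis: Ontology):
        """'Are H and T inseparable w.r.t. A* and Q?'
        Returns None if yes; otherwise a counterexample (A*, q)."""
        for q in self.query_language:
            if entails(self.target, self.fixed_abox, q) != entails(hypothesis, self.fixed_abox, q):
                return self.fixed_abox, q
        return None


def learn(oracle: Oracle) -> Ontology:
    """Generic interaction loop: refine H until the oracle reports inseparability."""
    hypothesis = Ontology()
    while (counterexample := oracle.inseparable(hypothesis)) is not None:
        abox, q = counterexample
        # A real learner would generalise counterexamples via further membership
        # queries; here we only add or drop the offending assertion.
        if oracle.membership(abox, q):
            hypothesis.axioms.add(q.text)
        else:
            hypothesis.axioms.discard(q.text)
    return hypothesis


if __name__ == "__main__":
    fixed_abox = {"A(a)", "r(a,b)"}           # A*
    queries = [Query("A(a)"), Query("B(b)")]  # Q
    target = Ontology({"B(b)"})               # T, hidden from the learner
    print(learn(Oracle(target, fixed_abox, queries)).axioms)  # -> {'B(b)'}
```

In this toy run the learner starts from an empty hypothesis, receives (A*, q) counterexamples from the inseparability oracle, and uses membership queries to decide how to update the hypothesis. The paper's actual contribution concerns how efficiently such updates can be made for ELH ontologies and various query languages, which the sketch does not attempt to capture.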