Data Quality in Ontology-based Data Access: The Case of Consistency

Authors: Marco Console, Maurizio Lenzerini

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We define a general framework for data consistency in OBDA, and present algorithms and complexity analysis for several relevant tasks related to the problem of checking data quality under this dimension, both at the extensional level (content of the data sources) and at the intensional level (schema of the data sources).
Researcher Affiliation | Academia | Marco Console, Maurizio Lenzerini, Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti, Sapienza Università di Roma, Roma, Italy, lastname@dis.uniroma1.it
Pseudocode | No | The paper describes algorithms and their properties (e.g., the chase algorithm) but does not provide any structured pseudocode blocks or figures. A generic illustration of a chase-style procedure is sketched after this table.
Open Source Code | No | The paper does not provide any information or links regarding the availability of open-source code for the described methodology.
Open Datasets | No | This is a theoretical paper that does not involve empirical experiments with datasets, so no publicly available datasets are referenced.
Dataset Splits | No | This is a theoretical paper that does not involve empirical experiments with datasets, so no training/test/validation splits are reported.
Hardware Specification | No | This is a theoretical paper focused on algorithms and complexity analysis, and as such it does not mention any hardware used for experiments.
Software Dependencies | No | This is a theoretical paper focused on algorithms and complexity analysis, and as such it does not list any software dependencies with version numbers.
Experiment Setup | No | This is a theoretical paper and does not describe any empirical experimental setup, hyperparameters, or training configurations.
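For context on the Pseudocode row: the sketch below is a minimal, hypothetical illustration of a chase step for simple inclusion dependencies, written only to show the kind of procedure the paper refers to. It is not the authors' algorithm, and the relation names, dependency encoding, and example instance are all invented for this sketch.

```python
# Minimal sketch of a chase step for simple inclusion dependencies
# (a restricted form of tuple-generating dependencies). Hypothetical
# example; the paper itself provides no pseudocode for its use of the chase.
from itertools import count

_fresh = count()  # source of fresh labelled nulls


def chase(instance, dependencies, max_rounds=100):
    """Apply inclusion dependencies until a fixpoint (or the round limit).

    instance:      dict mapping relation name -> set of tuples
    dependencies:  list of (src_rel, src_pos, dst_rel, dst_pos, dst_arity),
                   read as: every value in column src_pos of src_rel must
                   also appear in column dst_pos of dst_rel.
    """
    for _ in range(max_rounds):
        changed = False
        for src_rel, src_pos, dst_rel, dst_pos, dst_arity in dependencies:
            dst_values = {t[dst_pos] for t in instance.get(dst_rel, set())}
            for t in list(instance.get(src_rel, set())):
                if t[src_pos] not in dst_values:
                    # Violation: repair it by adding a tuple whose other
                    # positions are filled with fresh labelled nulls.
                    new_tuple = tuple(
                        t[src_pos] if i == dst_pos else f"_N{next(_fresh)}"
                        for i in range(dst_arity)
                    )
                    instance.setdefault(dst_rel, set()).add(new_tuple)
                    dst_values.add(t[src_pos])
                    changed = True
        if not changed:
            break
    return instance


# Hypothetical instance: every employee's department must occur in Dept.
db = {"Emp": {("alice", "d1"), ("bob", "d2")}, "Dept": {("d1",)}}
deps = [("Emp", 1, "Dept", 0, 1)]
print(chase(db, deps))  # adds ("d2",) to Dept
```

Running the example adds the missing tuple ("d2",) to Dept and then reaches a fixpoint, which is the repair-until-stable behaviour that chase-based reasoning about consistency relies on.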