Computing Approximate Query Answers over Inconsistent Knowledge Bases

Authors: Sergio Greco, Cristian Molinaro, Irina Trubitsyna

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The paper states: "We show that consistent query answering in our framework is intractable (coNP-complete). In light of this result, we develop a polynomial time approximation algorithm for computing a sound (but possibly incomplete) set of consistent query answers." and "We show that consistent query answering in our framework is coNP-complete (data complexity). In light of this, we leverage universal repairs and provenance information to develop an approximation algorithm that provides a sound (but possibly incomplete) set of consistent query answers in polynomial time." (A toy illustration of these notions is given after the table.)
Researcher Affiliation | Academia | Sergio Greco, Cristian Molinaro, Irina Trubitsyna, University of Calabria, Italy, {greco,cmolinaro,trubitsyna}@dimes.unical.it
Pseudocode | No | The paper describes algorithmic steps and definitions, for instance "Definition 5 (Universal repair step)", but these are presented as definitions or prose rather than as structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not contain any statement about releasing source code for the methodology, nor does it provide links to a code repository.
Open Datasets | No | The paper does not mention using any datasets for training or empirical evaluation. The examples used (e.g., Example 1) are illustrative rather than actual experimental data.
Dataset Splits | No | The paper does not discuss experimental validation using data splits (train/validation/test).
Hardware Specification | No | The paper does not describe any experimental setup or mention specific hardware used for computations.
Software Dependencies | No | The paper does not mention specific software dependencies with version numbers, as it primarily focuses on theoretical and algorithmic contributions rather than implementation details.
Experiment Setup | No | The paper does not provide details about an experimental setup, such as hyperparameters or system-level training settings.
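To make the Research Type row concrete, the sketch below illustrates the general notions the quoted passages refer to: an answer is "consistent" if it holds in every repair (maximal consistent subset) of an inconsistent database, and a sound polynomial-time approximation may return only a subset of those answers. The relation, the key constraint, and the conflict-free-facts heuristic used here are illustrative assumptions only; this is not the paper's universal-repair/provenance-based algorithm.

```python
# Toy illustration of consistent query answering (CQA) and a sound
# polynomial-time under-approximation. The relation emp(name, dept), the key
# constraint on name, and the conflict-free heuristic are assumptions for
# illustration, not the algorithm from the paper.
from itertools import combinations

# Inconsistent instance: the key "name" is violated for 'ann'.
facts = [("ann", "cs"), ("ann", "math"), ("bob", "cs")]

def consistent(db):
    """Key constraint on the first attribute: at most one dept per name."""
    names = [n for n, _ in db]
    return len(names) == len(set(names))

def repairs(db):
    """All maximal consistent subsets (exponential in general)."""
    result = []
    for size in range(len(db), -1, -1):
        for subset in combinations(db, size):
            s = set(subset)
            if consistent(s) and not any(s < r for r in result):
                result.append(s)
    return result

def query(db):
    """Example query: names of employees in the 'cs' department."""
    return {n for n, d in db if d == "cs"}

# Exact consistent answers: tuples returned by the query in EVERY repair.
exact = set.intersection(*(query(r) for r in repairs(facts)))

# A simple sound (but possibly incomplete) polynomial-time approximation for
# monotone queries such as this one: evaluate the query only over facts that
# participate in no conflict, since those facts belong to every repair.
conflict_free = [
    f for f in facts
    if all(consistent({f, g}) for g in facts if g != f)
]
approx = query(conflict_free)

print(exact)   # {'bob'}
print(approx)  # {'bob'}  (sound: always a subset of the exact answers)
```

In this toy instance the approximation happens to coincide with the exact consistent answers; in general it is only guaranteed to be a subset, which is the soundness-without-completeness trade-off described in the quoted passages.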