Enriching Ontology-based Data Access with Provenance
Authors: Diego Calvanese, Davide Lanti, Ana Ozaki, Rafael Penaloza, Guohui Xiao
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implement Task (ii) in a state-of-the-art OBDA system and show the practical feasibility of the approach through an extensive evaluation against two popular benchmarks. To evaluate the feasibility of our approach, we implemented a prototype system (OntoProv) that extends the state-of-the-art OBDA system Ontop [Calvanese et al., 2017] with the support for provenance. (...) We compare Ontop v3.0.0-beta-3 and OntoProv over the BSBM [Bizer and Schultz, 2009] and the NPD [Lanti et al., 2015] benchmarks. |
| Researcher Affiliation | Academia | 1 KRDB Research Centre, Free University of Bozen-Bolzano, Italy; 2 University of Milano-Bicocca, Italy |
| Pseudocode | Yes | Algorithm 1 PerfectRef (...) Algorithm 2 ComputeProv |
| Open Source Code | No | The paper describes a prototype system (OntoProv) that extends Ontop, but it does not provide an explicit statement or link indicating that OntoProv's code is open-sourced. |
| Open Datasets | Yes | We compare Ontop v3.0.0-beta-3 and OntoProv over the BSBM [Bizer and Schultz, 2009] and the NPD [Lanti et al., 2015] benchmarks. |
| Dataset Splits | No | The paper mentions dataset sizes (e.g., '10k and 1M products', 'NPD10, which is 10 times the size of NPD') but does not specify training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit split methodologies). |
| Hardware Specification | Yes | Experiments were run on a server with 2 Intel Xeon X5690 Processors (24 logical cores at 3.47 GHz), 106 GB of RAM and five 1 TB 15K RPM HDs. |
| Software Dependencies | Yes | We compare Ontop v3.0.0-beta-3 and OntoProv over the BSBM [Bizer and Schultz, 2009] and the NPD [Lanti et al., 2015] benchmarks. As RDBMS we have used PostgreSQL 11.2. |
| Experiment Setup | No | The paper discusses some aspects of the evaluation setup, such as disabling optimizations and instantiating queries, but it does not provide detailed experimental settings such as hyperparameter values, training configurations (e.g., learning rate, batch size, epochs), or optimizer settings. |