Incremental Update of Datalog Materialisation: the Backward/Forward Algorithm
Authors: Boris Motik, Yavor Nenov, Robert Piro, Ian Horrocks
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our evaluation, the B/F algorithm was several orders of magnitude more efficient than the DRed algorithm on some inputs, and it was never significantly less efficient. |
| Researcher Affiliation | Academia | Boris Motik, Yavor Nenov, Robert Piro and Ian Horrocks Department of Computer Science, Oxford University Oxford, United Kingdom firstname.lastname@cs.ox.ac.uk |
| Pseudocode | Yes | Algorithm 1 B/F delete(); Algorithm 2 checkProvability(F); Algorithm 3 saturate() |
| Open Source Code | Yes | "Our system and datasets are available online." https://krr-nas.cs.ox.ac.uk/2015/AAAI/RDFox/Incremental |
| Open Datasets | Yes | LUBM (Guo, Pan, and Heflin 2005) is a well-known RDF benchmark. ... UOBM (Ma et al. 2006) ... Claros |
| Dataset Splits | No | The paper evaluates an incremental update algorithm on facts to be deleted from a materialisation; it does not specify training/validation/test splits as in supervised learning. It mentions selecting "subsets of E of various size" for deletion, but this is not a traditional dataset split. |
| Hardware Specification | Yes | We used a server with 256 GB of RAM and two Intel Xeon E5-2670 CPUs at 2.60GHz running Fedora release 20, kernel version 3.15.10-200.fc20.x86_64. |
| Software Dependencies | No | The paper mentions "RDFox" and the operating system "Fedora release 20, kernel version 3.15.10-200.fc20.x86_64" but does not specify version numbers for any other software libraries, frameworks, or solvers that would be needed for replication. |
| Experiment Setup | No | The paper describes the general test setting, including dataset generation and the optimization of DRed, but it does not provide specific experimental setup details such as hyperparameter values, model initialization, or specific training configurations in the main text. |
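The pseudocode row above lists the paper's three procedures: B/F delete(), checkProvability(F), and saturate(). As an illustrative aid only, the sketch below approximates the core Backward/Forward idea, deleting a fact only if backward search finds no surviving proof, in plain Python. The rule encoding, the `RULES`/`provable`/`delete` names, and the propositional (ground-atom) simplification are assumptions for this example; they are not the authors' Algorithms 1-3.

```python
# Hedged sketch of a backward/forward-style incremental deletion for a
# propositional Datalog materialisation. Illustrative approximation of
# the B/F idea, not the paper's exact algorithms.

# A rule is (head, [body atoms]); atoms are ground strings for simplicity.
RULES = [
    ("p", ["e1"]),   # p holds if explicit fact e1 holds
    ("q", ["p"]),    # q derived from p
    ("q", ["e2"]),   # alternative derivation of q from e2
]

def provable(fact, explicit, seen=frozenset()):
    """Backward step: does `fact` still have a proof from `explicit`?"""
    if fact in explicit:
        return True
    if fact in seen:          # guard against cyclic rules on this path
        return False
    return any(
        head == fact
        and all(provable(b, explicit, seen | {fact}) for b in body)
        for head, body in RULES
    )

def delete(explicit, materialised, removed):
    """Remove `removed` explicit facts, then drop every materialised
    fact that has lost all of its proofs (forward propagation)."""
    explicit = explicit - removed
    keep = {f for f in materialised if provable(f, explicit)}
    return explicit, keep
```

For example, deleting e1 from the materialisation {e1, e2, p, q} drops p (its only proof used e1) but keeps q, which remains provable from e2 via the alternative rule.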