End-to-end Differentiable Proving
Authors: Tim Rocktäschel, Sebastian Riedel
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules. (A sketch of the ComplEx scoring function appears after this table.) |
| Researcher Affiliation | Collaboration | Tim Rocktäschel University of Oxford tim.rocktaschel@cs.ox.ac.uk Sebastian Riedel University College London & Bloomsbury AI s.riedel@cs.ucl.ac.uk |
| Pseudocode | Yes | We use pseudocode in the style of a functional programming language to define the behavior of modules and auxiliary functions. |
| Open Source Code | No | No explicit statement or link providing concrete access to source code for the methodology described in this paper was found. The paper does not mention releasing the code, nor does it provide a repository link. |
| Open Datasets | Yes | Consistent with previous work, we carry out experiments on four benchmark KBs... The Countries KB is a dataset introduced by [35]... We use the Nations, Alyawarra kinship (Kinship) and Unified Medical Language System (UMLS) KBs from [10]. |
| Dataset Splits | Yes | We follow [36] and split countries randomly into a training set of 204 countries (train), a development set of 20 countries (dev), and a test set of 20 countries (test)... We split every KB into 80% training facts, 10% development facts and 10% test facts. (An illustrative split sketch appears after this table.) |
| Hardware Specification | No | No specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions 'TensorFlow [69]' in the acknowledgements, but does not specify a version number. No other specific software dependencies with version numbers are listed for replicating the experiment. |
| Experiment Setup | No | Training details, including hyperparameters and rule templates, can be found in Appendix E. (Explanation: While the paper states that training details and hyperparameters exist and can be found in Appendix E, the specific values and configurations themselves are not presented in the main text provided.) |
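For context on the baseline named in the Research Type row: ComplEx (Trouillon et al., 2016) scores a triple (subject, relation, object) as the real part of a trilinear product of complex-valued embeddings. The sketch below is a minimal illustration of that standard scoring function in NumPy; the embedding dimension and random initialisation are placeholders, not values taken from the paper.

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """Standard ComplEx triple score: Re(<w_r, e_s, conj(e_o)>).

    e_s, w_r, e_o are complex-valued embedding vectors of equal length;
    a higher score means the triple is judged more plausible.
    """
    return np.real(np.sum(w_r * e_s * np.conj(e_o)))

# Toy usage with placeholder dimensions (not from the paper).
rng = np.random.default_rng(0)
dim = 4
e_s = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
w_r = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
e_o = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
print(complex_score(e_s, w_r, e_o))
```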
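The Dataset Splits row reports an 80%/10%/10% split of each KB's facts into train/dev/test. The sketch below shows one way such a split could be produced; the toy fact list, the seed, and the shuffling procedure are illustrative assumptions, not the authors' released preprocessing code.

```python
import random

def split_facts(facts, train_frac=0.8, dev_frac=0.1, seed=42):
    """Shuffle a list of (subject, relation, object) facts and split it
    into train/dev/test portions (e.g. 80%/10%/10%)."""
    facts = list(facts)
    random.Random(seed).shuffle(facts)
    n_train = int(train_frac * len(facts))
    n_dev = int(dev_frac * len(facts))
    train = facts[:n_train]
    dev = facts[n_train:n_train + n_dev]
    test = facts[n_train + n_dev:]
    return train, dev, test

# Toy usage with placeholder facts (not from the benchmark KBs).
toy_facts = [("lisa", "parentOf", "bart"), ("homer", "marriedTo", "marge"),
             ("bart", "siblingOf", "lisa"), ("abe", "parentOf", "homer")]
train, dev, test = split_facts(toy_facts)
print(len(train), len(dev), len(test))
```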