An Open-World Extension to Knowledge Graph Completion Models
Authors: Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, Faisal Shafait
AAAI 2019, pp. 3044–3051
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments on several datasets including FB20k, DBPedia50k and our new dataset FB15k-237-OWE, we demonstrate competitive results. |
| Researcher Affiliation | Academia | ¹National University of Science and Technology, Pakistan; ²RheinMain University of Applied Sciences, Germany |
| Pseudocode | No | The paper describes its approach conceptually and mathematically in text, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and the new FB15k-237-OWE dataset are available online¹. (¹ https://github.com/haseebs/OWE) |
| Open Datasets | Yes | In experiments on several datasets including FB20k, DBPedia50k and our new dataset FB15k-237-OWE, we demonstrate competitive results. The code and the new FB15k-237-OWE dataset are available online¹. (¹ https://github.com/haseebs/OWE) |
| Dataset Splits | Yes | The dataset also contains two validation sets: a closed-world one (with random triples picked from the training set) and an open-world one (with random triples picked from the test set). We manually optimize all hyperparameters on the validation set. Due to the lack of an open-world validation set on FB20k, we randomly sampled 10% of the test triples as a validation set (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | For training TransE and DistMult, we use the OpenKE framework. The paper mentions software frameworks like OpenKE and the Adam optimizer, but does not provide specific version numbers for these or other key software components, which are necessary for reproducibility. |
| Experiment Setup | Yes | For training the transformation Ψmap, we used the Adam optimizer with a learning rate of 10⁻³ and batch size of 128. For DBPedia50k we use a dropout of 0.5, while for FB20k and FB15k-237-OWE we use no dropout. The embedding used is the pretrained 300-dimensional Wikipedia2Vec embedding and the transformation used is affine unless stated otherwise. (A training sketch follows the table.) |
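
As a concrete reading of the FB20k split described in the Dataset Splits row, the following is a minimal sketch of sampling 10% of the test triples as an open-world validation set. The file name, tab-separated triple format, and fixed seed are illustrative assumptions, not taken from the paper or the repository.

```python
# Hypothetical sketch: carving a 10% open-world validation set out of the
# FB20k test triples, as the paper describes. The file name and the
# tab-separated head/relation/tail format are assumptions.
import random

def split_validation(test_triples, fraction=0.1, seed=42):
    """Randomly move `fraction` of the test triples into a validation set."""
    rng = random.Random(seed)           # fixed seed so the split is repeatable
    triples = list(test_triples)
    rng.shuffle(triples)
    cut = int(len(triples) * fraction)
    return triples[cut:], triples[:cut]  # (remaining test set, validation set)

with open("fb20k_test.txt") as f:
    test = [tuple(line.strip().split("\t")) for line in f]

test, valid = split_validation(test)
```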
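The Experiment Setup row pins down the optimizer, learning rate, batch size, dropout, embedding, and transformation type. Below is a minimal PyTorch sketch of that configuration: an affine map Ψmap from 300-dimensional Wikipedia2Vec text embeddings into the graph-embedding space, trained with Adam (lr 10⁻³, batch size 128) and optional dropout. The mean-squared-error objective and the graph-embedding dimension are assumptions for illustration; neither is fixed by the quoted setup.

```python
# Minimal sketch of the reported setup, assuming an MSE objective and a
# graph-embedding dimension equal to the 300-d text embedding.
import torch
import torch.nn as nn

class AffineMap(nn.Module):
    def __init__(self, text_dim=300, graph_dim=300, dropout=0.0):
        super().__init__()
        self.dropout = nn.Dropout(dropout)            # 0.5 for DBPedia50k, 0 otherwise
        self.linear = nn.Linear(text_dim, graph_dim)  # affine transformation: Wx + b

    def forward(self, text_emb):
        return self.linear(self.dropout(text_emb))

psi_map = AffineMap(dropout=0.5)
optimizer = torch.optim.Adam(psi_map.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # assumed objective, not stated in the table

# One hypothetical training step on a batch of 128 (text, graph) embedding pairs.
text_batch = torch.randn(128, 300)   # stand-in for pretrained Wikipedia2Vec vectors
graph_batch = torch.randn(128, 300)  # stand-in for closed-world KGC embeddings
loss = loss_fn(psi_map(text_batch), graph_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Keeping the map affine (a single `nn.Linear`) matches the table's note that the transformation is affine unless stated otherwise; swapping in a deeper module would change the reported setup.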