ExpressivE: A Spatio-Functional Embedding For Knowledge Graph Completion
Authors: Aleksandar Pavlović, Emanuel Sallinger
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on standard KGC benchmarks reveal that ExpressivE is competitive with state-of-the-art KGEs and even significantly outperforms them on WN18RR. |
| Researcher Affiliation | Academia | Aleksandar Pavlović & Emanuel Sallinger, Research Unit of Databases and Artificial Intelligence, TU Wien, Vienna, Austria. {aleksandar.pavlovic,emanuel.sallinger}@tuwien.ac.at |
| Pseudocode | No | The paper describes its methods mathematically and textually but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | We have made our code publicly available in a GitHub repository¹. It contains, in addition to the code of ExpressivE, a setup file to install the necessary libraries and a ReadMe.md file containing library versions and running instructions to facilitate the reproducibility of our results. ¹https://github.com/AleksVap/ExpressivE |
| Open Datasets | Yes | We evaluate ExpressivE on the two standard KGC benchmarks WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova & Chen, 2015). |
| Dataset Splits | Yes | Table 13: Benchmark dataset characteristics (Dataset, |E|, |R|, Training Triples, Validation Triples, Testing Triples). Both WN18RR and FB15k-237 provide training, testing, and validation splits, which were directly used in our experiments. |
| Hardware Specification | Yes | Each model was trained and evaluated on one of 4 GeForce RTX 2080 GPUs of our internal cluster. |
| Software Dependencies | Yes | We have implemented ExpressivE in PyKEEN 1.7 (Ali et al., 2021)... It contains, in addition to the code of ExpressivE, a setup file to install the necessary libraries and a ReadMe.md file containing library versions and running instructions to facilitate the reproducibility of our results. |
| Experiment Setup | Yes | We performed hyperparameter tuning over the learning rate λ, embedding dimensionality d, number of negative samples neg, loss margin γ, adversarial temperature α, and minimal denominator D_min. ... The best performing hyperparameters for ExpressivE on each benchmark dataset are listed in Table 12. |
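The hyperparameters named in the Experiment Setup row can be organized as a small search-space sketch. The value ranges below are illustrative placeholders only, not the tuned values reported in the paper's Table 12, and the grid-search expansion is a generic technique, not the paper's tuning procedure.

```python
from itertools import product

# Search space over the hyperparameters the paper reports tuning.
# All ranges here are PLACEHOLDERS for illustration, not the paper's values.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],            # λ
    "embedding_dim": [200, 500, 1000],              # d
    "num_negative_samples": [50, 100, 150],         # neg
    "loss_margin": [3.0, 6.0, 9.0],                 # γ
    "adversarial_temperature": [0.5, 1.0, 2.0],     # α
    "min_denominator": [1e-4, 1e-3],                # D_min
}

# Expand the grid: one dict per candidate configuration, each of which
# would be trained and scored on the validation split.
configs = [
    dict(zip(search_space, values))
    for values in product(*search_space.values())
]

print(len(configs))  # 3*3*3*3*3*2 = 486 candidate configurations
```

In practice such a grid is usually pruned (e.g. random search or Bayesian optimization over the same space); the sketch only shows how the six reported hyperparameters combine into candidate runs.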