Query Embedding on Hyper-Relational Knowledge Graphs

Authors: Dimitrios Alivanistos, Max Berrendorf, Michael Cochez, Mikhail Galkin

ICLR 2022

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we empirically evaluate the performance of QA over hyper-relational KGs. We design experiments to tackle the following research questions: RQ1) Does QA performance benefit from the use of qualifiers? RQ2) What are the generalization capabilities of our hyper-relational QA approach? RQ3) Does QA performance depend on the physical representation of a hyper-relational KG, i.e., reification?
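To make RQ1 and RQ3 concrete, the following sketch (not code from the paper; all names are illustrative) shows what a hyper-relational statement with qualifiers looks like, and how the same statement can be reified into plain triples via a statement node:

```python
# Sketch (not from the paper): a hyper-relational statement is a main triple
# plus qualifier key-value pairs, Wikidata-style.
from dataclasses import dataclass


@dataclass(frozen=True)
class Statement:
    head: str
    relation: str
    tail: str
    qualifiers: tuple = ()  # tuple of (qualifier_relation, value) pairs


# "Einstein educated_at ETH Zurich", qualified with degree and end time
stmt = Statement(
    head="Albert_Einstein",
    relation="educated_at",
    tail="ETH_Zurich",
    qualifiers=(("academic_degree", "BSc"), ("end_time", "1900")),
)


def reify(stmt, node_id):
    """RQ3 concerns reification: store the statement as plain triples
    by introducing a node that stands for the statement itself."""
    triples = [
        (node_id, "subject", stmt.head),
        (node_id, "predicate", stmt.relation),
        (node_id, "object", stmt.tail),
    ]
    triples += [(node_id, q, v) for q, v in stmt.qualifiers]
    return triples


print(reify(stmt, "_:s1")[0])  # ('_:s1', 'subject', 'Albert_Einstein')
```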
Researcher Affiliation Collaboration Dimitrios Alivanistos (1,4), Max Berrendorf (2), Michael Cochez (1,4), and Mikhail Galkin (3). Affiliations: 1 Vrije Universiteit Amsterdam; 2 LMU Munich; 3 Mila, McGill University; 4 Discovery Lab, Elsevier
Pseudocode No The paper describes the model and its components in detail but does not include an explicitly labeled pseudocode or algorithm block.
Open Source Code Yes STARQE implementation: https://github.com/DimitrisAlivas/StarQE
Open Datasets Yes WD50K (Galkin et al., 2020), comprised of Wikidata statements with varying numbers of qualifiers. The dataset is available under CC BY 4.0.
Dataset Splits Yes This dataset already provides train, validation, and test splits, each containing a selection of hyper-relational triples. It is made publicly available by the authors in CSV format. ... We utilize three named graphs: triple train, triple validation, and triple test, to prevent validation and test set leakage. ... For the training set, all statements in a query come from the triple train graph only; for the validation set, one statement comes from triple validation and the other edge(s) come from either triple train or triple validation; and for the test set, one statement comes from triple test, and the other statements may come from any of triple train, triple validation, and triple test.
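The leakage rule quoted above can be sketched as a simple validity check. This is an illustrative reconstruction, assuming each statement in a query is tagged with the named graph it came from; it is not code from the authors' repository:

```python
# Sketch, assuming statements carry a split label ('train'|'valid'|'test')
# naming the graph they come from.
def query_split_is_valid(statement_splits, target_split):
    """Check the anti-leakage rule for a query assigned to target_split."""
    if target_split == "train":
        # Training queries: every statement from triple train only.
        return all(s == "train" for s in statement_splits)
    if target_split == "valid":
        # Validation queries: at least one validation statement,
        # the rest from train or validation (never test).
        return ("valid" in statement_splits
                and all(s in ("train", "valid") for s in statement_splits))
    if target_split == "test":
        # Test queries: at least one test statement; others unrestricted.
        return "test" in statement_splits
    raise ValueError(f"unknown split: {target_split}")


assert query_split_is_valid(["train", "train"], "train")
assert query_split_is_valid(["valid", "train"], "valid")
assert not query_split_is_valid(["test", "train"], "valid")
assert query_split_is_valid(["test", "valid"], "test")
```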
Hardware Specification Yes All experiments are executed on machines with single GTX 1080 Ti or RTX 2080 Ti GPU and 12 or 32 CPUs.
Software Dependencies No The paper mentions implementing STARQE and baselines in PyTorch (Paszke et al., 2019), but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup Yes We provide our chosen hyper-parameters after performing hyper-parameter optimisation in Table 6 and detailed results including standard deviation across five runs with different random seeds in Tables 7 and 8.
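Reporting results as mean with standard deviation across five random seeds, as described above, amounts to the following minimal sketch (the metric values here are hypothetical, not from the paper):

```python
# Sketch: aggregate a metric over runs with different random seeds.
import statistics


def summarize(metric_per_seed):
    """Return (mean, sample standard deviation) across seeds."""
    mean = statistics.mean(metric_per_seed)
    std = statistics.stdev(metric_per_seed)
    return mean, std


mrr_runs = [0.51, 0.50, 0.52, 0.49, 0.51]  # hypothetical MRR per seed
mean, std = summarize(mrr_runs)
print(f"MRR = {mean:.3f} ± {std:.3f}")
```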