Answering Complex Queries in Knowledge Graphs with Bidirectional Sequence Encoders

Authors: Bhushan Kotnis, Carolin Lawrence, Mathias Niepert

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We introduce two new challenging datasets for studying conjunctive query inference and conduct experiments on several benchmark datasets that demonstrate BIQE significantly outperforms state of the art baselines.
Researcher Affiliation | Industry | Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, NEC Laboratories Europe, Heidelberg, Germany, {bhushan.kotnis,carolin.lawrence,mathias.niepert}@neclab.eu
Pseudocode | No | The paper describes the model and process but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement about releasing code for the described methodology or a direct link to a source-code repository.
Open Datasets | Yes | We also evaluate the proposed model on two of these datasets, namely, FB15K-237 and NELL-995. ... To address these shortcomings, we introduce two new challenging datasets based on popular KG completion benchmarks, namely FB15K-237 (Toutanova and Chen 2015) and WN18RR (Dettmers et al. 2018).
Dataset Splits | Yes | Table 1 describes the dataset statistics for FB15K-237-CQ and WN18RR-CQ. For the Paths dataset, the test and validation splits only contain paths while the training contains paths and triples.
Hardware Specification | No | The paper states "We use the standard BERT architecture as defined in (Devlin et al. 2019)" and refers to an appendix for training details, but the main text does not specify any particular hardware (GPU/CPU models, memory, etc.) used to run the experiments.
Software Dependencies | No | The paper mentions using "the standard BERT architecture", but does not provide specific version numbers for software dependencies such as libraries, frameworks, or programming languages.
Experiment Setup | No | The paper states "Due to space constraints we moved the training details and hyperparameter tuning to the appendix which can be found in (Kotnis, Lawrence, and Niepert 2020)", but the main text does not provide specific experiment setup details such as hyperparameter values or training configurations.