Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs

Authors: Hongyu Ren, Jure Leskovec

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on standard KG datasets and compare BETAE to prior approaches [9, 10] that can only handle EPFO queries. Experiments show that our model BETAE is able to achieve state-of-the-art performance in handling arbitrary conjunctive queries (including ∧, ∃), with a relative increase in accuracy of up to 25.4%.
Researcher Affiliation | Academia | Hongyu Ren, Stanford University (hyren@cs.stanford.edu); Jure Leskovec, Stanford University (jure@cs.stanford.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes its methods using mathematical formulas and prose.
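Since the paper specifies its operators only as formulas, a minimal illustrative sketch may help. This is not the authors' code: it assumes each entity/query is embedded as a vector of Beta distributions with parameters (α, β), that negation maps (α, β) to (1/α, 1/β), and that intersection interpolates parameters with weights (which the paper learns via attention; here they are plain inputs).

```python
# Hedged sketch of BetaE-style probabilistic operators (illustrative, not the
# authors' implementation). Embeddings are lists of (alpha, beta) pairs, one
# Beta distribution per embedding dimension.

def beta_negation(params):
    """Negation: (alpha, beta) -> (1/alpha, 1/beta).
    This flips high-density regions to low-density ones, and applying it
    twice recovers the original parameters (an involution)."""
    return [(1.0 / a, 1.0 / b) for a, b in params]

def beta_intersection(param_sets, weights):
    """Intersection as a weighted interpolation of Beta parameters.
    `weights` stands in for the attention weights the model would learn."""
    dims = len(param_sets[0])
    out = []
    for d in range(dims):
        a = sum(w * ps[d][0] for w, ps in zip(weights, param_sets))
        b = sum(w * ps[d][1] for w, ps in zip(weights, param_sets))
        out.append((a, b))
    return out

q = [(2.0, 0.5)]
assert beta_negation(beta_negation(q)) == q  # double negation is the identity
```

The closure of the Beta family under these operations (the result of each operator is again a set of Beta parameters) is what lets such a model compose operators for multi-hop queries.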
Open Source Code | Yes | Project website with data and code can be found at http://snap.stanford.edu/betae.
Open Datasets | Yes | We use three standard KGs with official training/validation/test edge splits, FB15k [4], FB15k-237 [35] and NELL995 [27], and follow [10] for the preprocessing.
Dataset Splits | Yes | We use three standard KGs with official training/validation/test edge splits, FB15k [4], FB15k-237 [35] and NELL995 [27], and follow [10] for the preprocessing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact CPU/GPU models, memory, or cloud computing specifications) used for running its experiments.
Software Dependencies | No | The paper defers hyperparameters, architectures, and further details to Appendix D, which is not provided. No software names with version numbers are mentioned in the main text.
Experiment Setup | Yes | We ran each method for 3 different random seeds after finetuning the hyperparameters. We list the hyperparameters, architectures and more details in Appendix D.
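The multi-seed protocol above can be sketched as follows. This is a hypothetical harness, not the authors' script: `run_experiment` is a placeholder standing in for a full BETAE training and evaluation run, and the score range is invented for illustration.

```python
# Hedged sketch of a 3-seed evaluation protocol (illustrative only).
import random
import statistics

def run_experiment(seed):
    """Placeholder for one training/evaluation run at a given seed.
    A real run would train the model and return a test metric such as MRR;
    here we emit a deterministic pseudo-score so the harness is runnable."""
    rng = random.Random(seed)
    return 0.40 + 0.02 * rng.random()

seeds = [0, 1, 2]
scores = [run_experiment(s) for s in seeds]
print(f"mean MRR: {statistics.mean(scores):.4f} "
      f"(std {statistics.stdev(scores):.4f} over {len(seeds)} seeds)")
```

Reporting mean and standard deviation across seeds, rather than a single run, is the usual way to make the reported numbers robust to initialization noise.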