Improved Knowledge Graph Embedding Using Background Taxonomic Information

Authors: Bahare Fatemi, Siamak Ravanbakhsh, David Poole

AAAI 2019, pp. 3526-3533

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on public knowledge graphs show that, despite its simplicity, our approach is surprisingly effective. The objective of our empirical evaluations is two-fold: first, we want to see the practical implication of non-negativity constraints in terms of effectiveness of training and the quality of final results; second, and more importantly, we would like to evaluate the practical benefit of incorporating prior knowledge in the form of subsumptions in sparse data regimes. (A hedged sketch of both ideas appears after this table.)
Researcher Affiliation | Academia | Bahare Fatemi, Siamak Ravanbakhsh, David Poole, {bfatemi, siamakx, poole}@cs.ubc.ca (University of British Columbia)
Pseudocode | No | The paper describes its methods through text and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any explicit statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | Experiments were conducted on four standard benchmarks: WN18, FB15K, Sport, and Location. WN18 is a subset of WORDNET (Miller 1995) and FB15K is a subset of FREEBASE (Bollacker et al. 2008). The Sport and Location datasets were introduced by Wang et al. (2015), who created them using NELL (Mitchell et al. 2015).
Dataset Splits | Yes | For evaluation on WN18 and FB15K, the existing triples in the KG are split into the same train, validation, and test sets as in Bordes et al. (2013). Evaluation metrics: evaluating different KG completion methods requires a train set N and a test set T, where N ∪ T = K. (See the filtered-ranking sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud computing specifications) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies (e.g., programming languages, libraries, or solvers) with version numbers.
Experiment Setup | No | The paper mentions high-level training strategies such as L2-regularization and stochastic optimization, but does not provide concrete setup details such as hyperparameter values (e.g., learning rate, batch size), model initialization, or optimizer settings. (An illustrative placeholder configuration is sketched after this table.)
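To make the Research Type row concrete, the following is a minimal sketch, not the authors' released code (none is available per the Open Source Code row). It assumes a SimplE-style bilinear scorer; the model names, dimensions, and loss are illustrative placeholders. It shows the two ideas the row refers to: entity embeddings kept non-negative by a projection step, and a subsumption p ⊑ q injected by tying the relation parameters.

```python
# A hedged sketch (assumed parameterization, not the paper's code) of
# (1) non-negative entity embeddings via projected gradient steps, and
# (2) enforcing a subsumption p ⊑ q on relation embeddings.
import torch

n_entities, n_relations, dim = 100, 10, 32
E = torch.nn.Embedding(n_entities, dim)   # entity embeddings
R = torch.nn.Embedding(n_relations, dim)  # relation embeddings

def score(h, r, t):
    # Bilinear score <e_h, w_r, e_t>; higher means "more likely true".
    return (E(h) * R(r) * E(t)).sum(dim=-1)

opt = torch.optim.SGD(list(E.parameters()) + list(R.parameters()), lr=0.05)

def train_step(h, r, t, label):
    # label is +1 for a true triple, -1 for a sampled negative.
    opt.zero_grad()
    loss = torch.nn.functional.softplus(-label * score(h, r, t)).mean()
    loss.backward()
    opt.step()
    # Projection: clip entity embeddings back onto the non-negative orthant.
    with torch.no_grad():
        E.weight.clamp_(min=0.0)
    return loss.item()

def enforce_subsumption(p, q):
    # If relation p is subsumed by q (p ⊑ q), force w_q >= w_p element-wise.
    with torch.no_grad():
        R.weight[q] = torch.maximum(R.weight[q], R.weight[p])
```

Because the entity factors are non-negative, w_q ≥ w_p element-wise implies score(h, q, t) ≥ score(h, p, t) for every entity pair, so the subsumption holds by construction rather than by adding training examples; this monotonicity argument is what the non-negativity constraints buy, though the exact projection and tying scheme above is an assumption.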
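The Dataset Splits row references the train/test protocol of Bordes et al. (2013); the snippet below sketches the standard filtered ranking evaluation (MRR and Hits@k) that this protocol implies. Only tail replacement is shown for brevity (in practice head and tail replacement are averaged); `score` and all other names are placeholders.

```python
# Standard filtered ranking protocol: rank the true tail against all
# candidate tails, masking out other triples known to be true.
import numpy as np

def evaluate(score, test_triples, all_true, n_entities, ks=(1, 3, 10)):
    ranks = []
    for h, r, t in test_triples:
        cand = np.array([score(h, r, e) for e in range(n_entities)])
        for e in range(n_entities):
            if e != t and (h, r, e) in all_true:
                cand[e] = -np.inf  # "filtered" setting
        rank = 1 + int((cand > cand[t]).sum())
        ranks.append(rank)
    ranks = np.array(ranks, dtype=float)
    return {"MRR": float((1.0 / ranks).mean()),
            **{f"Hits@{k}": float((ranks <= k).mean()) for k in ks}}
```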
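Since the Experiment Setup row notes that no hyperparameters are reported, anyone reproducing the training loop must choose their own. The values below are illustrative placeholders only, showing the shape of the "L2-regularization plus stochastic optimization" recipe the paper mentions; none of them come from the paper.

```python
# Placeholder training configuration; all values are assumed, not reported.
import torch

lr, l2_lambda, batch_size, epochs = 0.1, 1e-3, 512, 100  # hypothetical

emb = torch.nn.Parameter(torch.rand(100, 32))  # stand-in parameter table
opt = torch.optim.Adagrad([emb], lr=lr)        # one possible stochastic optimizer

def regularized_loss(data_loss):
    # L2 penalty added to whatever data loss the model uses.
    return data_loss + l2_lambda * emb.pow(2).sum()
```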