Adversarial Explanations for Knowledge Graph Embeddings

Authors: Patrick Betz, Christian Meilicke, Heiner Stuckenschmidt

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We report results on par with state-of-the-art white-box attack methods that additionally require full access to the model architecture, the learned embeddings, and the loss functions. This is a surprising result which indicates that knowledge graph embedding models can partly be explained post hoc with the help of symbolic methods."
Researcher Affiliation | Academia | "Patrick Betz, Christian Meilicke and Heiner Stuckenschmidt, University of Mannheim, Research Group Data and Web Science, {patrick, christian, heiner}@informatik.uni-mannheim.de"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it provides definitions but no algorithmic steps.
Open Source Code | Yes | "Further details about KGE training and the code for running all experiments can be found in the supplementary material."
Open Datasets | Yes | "We use the KGE models ComplEx [Trouillon et al., 2016], DistMult [Yang et al., 2015] and ConvE [Dettmers et al., 2018] and the same datasets as [Bhardwaj et al., 2021], i.e., we use the common benchmarks WN18RR and FB15k-237." (a DistMult scoring sketch follows the table)
Dataset Splits | Yes | "Datasets are usually split into training, validation and test sets where evaluation takes place by forming queries as described above for all the triples in the test set." (a ranking-evaluation sketch follows the table)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "We have set the time available for AnyBURL to learn the rule set to 100 seconds." (an AnyBURL configuration sketch follows the table)
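
For readers unfamiliar with the models named in the Open Datasets row: DistMult scores a triple (h, r, t) as a trilinear product of the head, relation, and tail embeddings. The sketch below is a minimal PyTorch illustration; the entity/relation counts and embedding dimension are assumptions chosen to resemble FB15k-237, not values taken from the paper or its code.

```python
import torch

# Illustrative sizes only (roughly FB15k-237-shaped), not from the paper.
num_entities, num_relations, dim = 14541, 237, 200

entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def distmult_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Trilinear DistMult score <e_h, w_r, e_t> for batches of index tensors."""
    return (entity_emb(h) * relation_emb(r) * entity_emb(t)).sum(dim=-1)
```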
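
The Dataset Splits row refers to the standard link-prediction protocol: each test triple (h, r, t) is turned into the queries (h, r, ?) and (?, r, t), and the rank of the true answer among all entities is recorded (typically aggregated as MRR or Hits@k). Below is a minimal sketch of tail-query ranking that reuses the DistMult scorer above; the usual filtering of other known true triples is omitted for brevity.

```python
def tail_rank(h: int, r: int, t: int) -> int:
    """Rank of the true tail t among all entities for the query (h, r, ?)."""
    h_idx = torch.full((num_entities,), h)
    r_idx = torch.full((num_entities,), r)
    all_tails = torch.arange(num_entities)
    scores = distmult_score(h_idx, r_idx, all_tails)
    # Rank = 1 + number of candidates scored strictly higher than the true tail.
    return int((scores > scores[t]).sum().item()) + 1
```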
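
The Experiment Setup row fixes AnyBURL's rule-learning budget at 100 seconds. AnyBURL is driven by a properties file and a Java entry point; the sketch below shows how such a run could be wired up. The property keys (PATH_TRAINING, PATH_OUTPUT, SNAPSHOTS_AT, WORKER_THREADS) follow AnyBURL's public documentation, but the jar name, class name, paths, and values are placeholders, and the exact invocation may differ across AnyBURL releases.

```python
import subprocess

# Hypothetical learning config; keys follow AnyBURL's documentation, while
# paths and values are placeholders, not taken from the paper.
config = """\
PATH_TRAINING  = data/WN18RR/train.txt
PATH_OUTPUT    = rules/wn18rr-rules
SNAPSHOTS_AT   = 100
WORKER_THREADS = 4
"""

with open("config-learn.properties", "w") as f:
    f.write(config)

# With SNAPSHOTS_AT = 100, AnyBURL writes a snapshot of the rules learned
# after 100 seconds (e.g., to rules/wn18rr-rules-100). The jar and class
# names below match older AnyBURL releases and may need adjusting.
subprocess.run(
    ["java", "-cp", "AnyBURL.jar", "de.unima.ki.anyburl.Learn",
     "config-learn.properties"],
    check=True,
)
```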