Learning Representations of Bi-level Knowledge Graphs for Reasoning beyond Link Prediction

Authors: Chanyoung Chung, Joyce Jiyoung Whang

AAAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that BiVE significantly outperforms all other methods in the two new tasks and the typical base-level link prediction in real-world bi-level knowledge graphs.
Researcher Affiliation Academia School of Computing, KAIST {chanyoung.chung, jjwhang}@kaist.ac.kr
Pseudocode No The paper describes the methods in narrative text and mathematical formulas but does not include structured pseudocode or algorithm blocks.
Open Source Code Yes Our datasets and codes are available at https://github.com/bdi-lab/BiVE.
Open Datasets Yes Based on well-known knowledge graphs, FB15K237 (Toutanova and Chen 2015) and DB15K (Garcia-Duran and Niepert 2018), we create three real-world bi-level knowledge graphs named FBH, FBHE, and DBHE.
Dataset Splits Yes We split E and H into training, validation, and test sets with a ratio of 8:1:1.
Hardware Specification No The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies No The paper mentions using specific knowledge graph embedding implementations like QuatE, BiQUE, and OpenKE for baselines, but does not specify software versions for programming languages, libraries, or frameworks (e.g., Python version, PyTorch version, CUDA version).
Experiment Setup Yes Given the maximum length of a random walk path L, we repeat the random walks by varying the length l = 2, ..., L and repeat the random walks n times for every l. In our experiments, we set L = 3 and n = 50,000,000. We select the pairs of (p_k, r) that satisfy c(p_k, r) ≥ τ where we set τ = 0.7. We set d = 200 and d̂ = 200. We repeat experiments ten times for each method and report the average of each metric.
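The quoted setup describes random-walk rule mining: sample walks of each length l = 2, ..., L, and keep a relation path p_k paired with a relation r when the estimated confidence c(p_k, r) reaches the threshold τ. Below is a minimal Python sketch of this style of procedure on a toy graph; the graph, relation names, and function names are illustrative assumptions (not from the paper), and the walk count is tiny compared to the paper's n = 50,000,000.

```python
import random
from collections import defaultdict

# Toy knowledge graph triplets (illustrative, not from the paper).
triplets = [
    ("a", "born_in", "b"), ("b", "city_of", "c"), ("a", "nationality", "c"),
    ("d", "born_in", "e"), ("e", "city_of", "f"), ("d", "nationality", "f"),
    ("j", "born_in", "k"), ("k", "city_of", "m"), ("j", "nationality", "m"),
    ("g", "born_in", "h"), ("h", "city_of", "i"), ("g", "works_in", "x"),
]

out_edges = defaultdict(list)   # head -> [(relation, tail), ...]
direct = defaultdict(list)      # (head, tail) -> [relation, ...]
for h, r, t in triplets:
    out_edges[h].append((r, t))
    direct[(h, t)].append(r)

def mine_rules(L=3, n=10_000, tau=0.7, seed=0):
    """Sample n random walks for each length l = 2..L; estimate the
    confidence c(p, r) that a walk following relation path p also has a
    direct link r between its endpoints, and keep pairs with c >= tau."""
    rng = random.Random(seed)
    path_count = defaultdict(int)   # p -> walks that followed path p
    pair_count = defaultdict(int)   # (p, r) -> walks whose endpoints r links
    entities = list(out_edges)
    for l in range(2, L + 1):
        for _ in range(n):
            start = node = rng.choice(entities)
            path = []
            for _ in range(l):
                if not out_edges[node]:
                    break               # dead end: discard this walk
                rel, node = rng.choice(out_edges[node])
                path.append(rel)
            if len(path) < l:
                continue
            p = tuple(path)
            path_count[p] += 1
            for r in direct[(start, node)]:
                pair_count[(p, r)] += 1
    return {(p, r): c / path_count[p]
            for (p, r), c in pair_count.items()
            if c / path_count[p] >= tau}

rules = mine_rules()
# ("born_in", "city_of") -> "nationality" holds for starts a, d, j but
# not g, so its estimated confidence is near 0.75 and survives tau = 0.7.
```

The confidence here is the empirical co-occurrence ratio c(p, r) = #(walks following p whose endpoints are also linked by r) / #(walks following p), which matches the thresholding described in the quote; the paper's exact scoring may differ in detail.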