Knowledge Graph Representation with Jointly Structural and Textual Encoding

Authors: Jiacheng Xu, Xipeng Qiu, Kan Chen, Xuanjing Huang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our models outperform baseline and obtain state-of-the-art results on link prediction and triplet classification tasks."
Researcher Affiliation | Academia | Jiacheng Xu, Xipeng Qiu, Kan Chen, Xuanjing Huang: Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; School of Computer Science, Fudan University; Software School, Fudan University, Shanghai, China
Pseudocode | No | The paper does not contain structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper provides a GitHub link (https://github.com/xrb92/DKRL), but explicitly states that it points to the datasets ("since their text descriptions are easily publicly available"), not to the source code for the authors' method.
Open Datasets | Yes | "We use two popular knowledge bases: WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008] in this paper. Specifically, we use WN18 (a subset of WordNet) [Bordes et al., 2014] and FB15K (a subset of Freebase) [Bordes et al., 2013] since their text descriptions are easily publicly available." https://github.com/xrb92/DKRL
Dataset Splits | Yes | Table 1 lists statistics of the two datasets: FB15K has 483,142 train / 50,000 valid / 59,071 test triples; WN18 has 141,442 train / 5,000 valid / 5,000 test triples. (A loading sketch follows the table below.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries).
Experiment Setup | Yes | "We select the margin γ among {1, 2, 10}, the embedding dimension d among {20, 50, 100}, the regularization η among {0, 1E-5, 1E-6}, two learning rates λs and λt among {0.001, 0.01, 0.05, 0.1} to learn the parameters of structure and text encoding. The dissimilarity measure is set either to the L1 or L2 distance. The best configurations obtained on the validation set are chosen for the evaluation. ... The final optimal configurations are: γ = 2, d = 20, η = 1E-5, λs = 0.01, λt = 0.1, and L1 distance on WN18; γ = 2, d = 100, η = 1E-5, λs = 0.01, λt = 0.05, and L1 distance on FB15K." (A grid-search sketch follows below.)
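
Since the split sizes are reported exactly, a reproduction can sanity-check its copy of the data against Table 1. Below is a minimal Python sketch, assuming the conventional tab-separated (head, relation, tail) triple files; the train.txt/valid.txt/test.txt layout is an assumption, not something the paper specifies:

```python
from pathlib import Path

def load_triples(path):
    """Read tab-separated (head, relation, tail) triples, one per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f if line.strip()]

# Expected split sizes from Table 1 of the paper.
EXPECTED_SIZES = {
    "FB15K": {"train": 483_142, "valid": 50_000, "test": 59_071},
    "WN18": {"train": 141_442, "valid": 5_000, "test": 5_000},
}

def check_splits(root, dataset):
    """Assert that each split file matches the size reported in Table 1."""
    for split, expected in EXPECTED_SIZES[dataset].items():
        triples = load_triples(Path(root) / dataset / f"{split}.txt")
        assert len(triples) == expected, (
            f"{dataset}/{split}: found {len(triples)} triples, expected {expected}"
        )
```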
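
The reported search space is small: 3 margins × 3 dimensions × 3 regularization values × 4 × 4 learning rates × 2 distance measures = 864 configurations, so exhaustive enumeration is feasible. Here is a minimal sketch of that validation-driven grid search, assuming a hypothetical train_and_evaluate(config) callable that trains the joint structural/textual encoder under one configuration and returns a validation metric where lower is better (e.g., mean rank on link prediction):

```python
from itertools import product

# Hyperparameter grid as reported in the paper's experiment setup.
GRID = {
    "margin": [1, 2, 10],                   # gamma
    "dim": [20, 50, 100],                   # embedding dimension d
    "reg": [0.0, 1e-5, 1e-6],               # regularization eta
    "lr_struct": [0.001, 0.01, 0.05, 0.1],  # lambda_s (structure encoder)
    "lr_text": [0.001, 0.01, 0.05, 0.1],    # lambda_t (text encoder)
    "dissim": ["L1", "L2"],                 # dissimilarity measure
}

def grid_search(train_and_evaluate):
    """Exhaustively search GRID, keeping the best-scoring configuration."""
    best_score, best_config = float("inf"), None
    keys = list(GRID)
    for values in product(*(GRID[key] for key in keys)):
        config = dict(zip(keys, values))
        score = train_and_evaluate(config)  # hypothetical helper; lower is better
        if score < best_score:
            best_score, best_config = score, config
    return best_config, best_score
```

Exhaustive search is only practical here because the grid is small; the paper's reported optima (e.g., γ = 2 and L1 distance on both datasets) correspond to single cells in this grid.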