Knowledge Representation Learning with Entities, Attributes and Relations

Authors: Yankai Lin, Zhiyuan Liu, Maosong Sun

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that, by modeling attributes separately from relations, KR-EAR significantly outperforms state-of-the-art KR models in the prediction of entities, attributes, and relations. The source code of this paper can be obtained from https://github.com/thunlp/KR-EAR.
Researcher Affiliation | Academia | 1) Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China; 2) Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China
Pseudocode | No | The paper describes the model components and their functions but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code of this paper can be obtained from https://github.com/thunlp/KR-EAR.
Open Datasets | Yes | We evaluate our model on a typical large-scale KG, Freebase. Freebase [Bollacker et al., 2008] is a large-scale and growing collaborative KG consisting mainly of data composed by its community members, which provides general facts about the real world. Finally, we build a dataset named FB24k and randomly separate the data into training and testing sets.
Dataset Splits | Yes | We tune our models using five-fold validation on the training set (a splitting sketch follows this table).
Hardware Specification | No | The paper notes that 'the running time per iteration is 14s for TransE and 297s for TransR in a single thread' but provides no specific details about the CPU, GPU, or other hardware used for the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers (e.g., Python version, library versions, frameworks).
Experiment Setup | Yes | The best configurations are λ = 0.001, γ = 0.1, k = 100, b1 = 7, b2 = 2, c1 = 10, c2 = 1, with L1 as the dissimilarity metric. For training, we set the number of iterations over all the training triples to 1000 (a hedged configuration sketch follows this table).
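The five-fold validation quoted under Dataset Splits is a standard protocol. Below is a minimal sketch, assuming triples are stored one per tab-separated line as (head, relation, tail); the file name fb24k_triples.txt, the 90/10 train/test ratio, and the use of scikit-learn's KFold are illustrative assumptions, not details taken from the paper or its released code.

```python
# Hedged sketch: random train/test separation plus five-fold validation
# folds for tuning. File name, split ratio, and scikit-learn usage are
# assumptions for illustration only.
import random
from sklearn.model_selection import KFold

def load_triples(path):
    """Read one (head, relation, tail) triple per tab-separated line."""
    with open(path) as f:
        return [tuple(line.strip().split("\t")) for line in f if line.strip()]

triples = load_triples("fb24k_triples.txt")  # hypothetical file name
random.seed(0)
random.shuffle(triples)

# Random separation into training and testing sets (ratio assumed).
cut = int(0.9 * len(triples))
train, test = triples[:cut], triples[cut:]

# Five-fold validation on the training set only, as the paper describes.
for fold, (tr_idx, val_idx) in enumerate(KFold(n_splits=5).split(train)):
    fold_train = [train[i] for i in tr_idx]
    fold_val = [train[i] for i in val_idx]
    # ... train one hyperparameter configuration on fold_train,
    #     evaluate it on fold_val, and keep the best configuration ...
```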
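The reported best configuration can likewise be written down concretely. The sketch below records the quoted values in a config dictionary and shows how an L1 dissimilarity is typically computed for a translation-style score of the form ||h + r - t||_1, since KR-EAR builds on translation-based KR models. The lambda/gamma/k role mapping in the comments and the toy usage are assumptions based on comparable models; the exact KR-EAR objective is defined in the paper and not reproduced here.

```python
# Hedged sketch: reported hyperparameters plus an L1 dissimilarity as used
# by translation-based KR models. Names and the lambda/gamma/k mapping are
# assumptions based on comparable models, not the paper's code.
import numpy as np

CONFIG = {
    "lambda": 0.001,     # learning rate in comparable translation models
    "gamma": 0.1,        # margin
    "k": 100,            # embedding dimension
    "b1": 7, "b2": 2,    # reported constants (roles defined in the paper)
    "c1": 10, "c2": 1,
    "dissimilarity": "L1",
    "iterations": 1000,  # passes over all training triples
}

def l1_dissimilarity(h, r, t):
    """L1 distance of a translation-style score, ||h + r - t||_1."""
    return np.abs(h + r - t).sum()

# Toy usage with random embeddings of the reported dimension k = 100.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=CONFIG["k"]) for _ in range(3))
print(l1_dissimilarity(h, r, t))
```

Using L1 here follows the quoted setup; replacing the body with np.square(h + r - t).sum() would give the squared-L2 alternative commonly tuned against it.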