Representation Learning of Knowledge Graphs with Entity Descriptions
Authors: Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, Maosong Sun
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. |
| Researcher Affiliation | Academia | Department of Computer Science and Technology, and State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou 221009, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of this paper can be obtained from https://github.com/xrb92/DKRL. |
| Open Datasets | Yes | In this paper, we adopt FB15K (Bordes et al. 2013), a dataset extracted from a typical large-scale KG Freebase (Bollacker et al. 2008), to evaluate the DKRL model on knowledge graph completion and entity classification. |
| Dataset Splits | Yes | FB20K shares the same training and validation set with FB15K. The statistics of datasets are listed in Table 1, which reports for FB15K: 1,341 relations, 14,904 entities, 472,860 training triples, 48,991 validation triples, and 57,803 test triples. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. It only discusses the experimental setup at a software/parameter level. |
| Software Dependencies | No | The paper mentions using "word embeddings trained on Wikipedia by word2vec (Mikolov et al. 2013)" but does not provide specific version numbers for software libraries, frameworks (e.g., TensorFlow, PyTorch), or other ancillary software components used for implementation or experimentation. |
| Experiment Setup | Yes | We train those models with entity/relation dimension n in {50, 80, 100}. Following (Bordes et al. 2013), we use a fixed learning rate λ among {0.0005, 0.001, 0.002} and margin γ among {0.5, 1.0, 1.5, 2.0}. The optimal configurations of the CNN are: λ = 0.001, γ = 1.0, k = 2, n = 100, nw = 100, nf = 100. |
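The hyperparameter ranges in the "Experiment Setup" row describe a search over embedding dimension, learning rate, and margin. A minimal sketch of that grid is below, assuming an exhaustive grid search (the paper does not specify the selection procedure, and the `train_and_evaluate` hook is a hypothetical placeholder, not the authors' code):

```python
from itertools import product

# Hyperparameter ranges reported in the paper's experiment setup.
dims = [50, 80, 100]                      # entity/relation dimension n
learning_rates = [0.0005, 0.001, 0.002]   # fixed learning rate λ
margins = [0.5, 1.0, 1.5, 2.0]            # margin γ

# Enumerate every configuration in the grid (3 × 3 × 4 = 36 runs).
grid = [
    {"n": n, "lr": lr, "margin": m}
    for n, lr, m in product(dims, learning_rates, margins)
]

# Optimal CNN configuration reported by the authors (nw = word
# embedding dimension, nf = number of feature maps, k = window size).
best_cnn = {"lr": 0.001, "margin": 1.0, "k": 2,
            "n": 100, "nw": 100, "nf": 100}

def train_and_evaluate(config):
    """Hypothetical hook: train DKRL with `config` and return a
    validation score; the real training loop is in the released code."""
    raise NotImplementedError
```

Note that the reported optimum lies inside the stated grid (n = 100, λ = 0.001, γ = 1.0), consistent with selection on the validation set.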