Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning With Confidence
Authors: Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we evaluate our models on knowledge graph noise detection, knowledge graph completion and triple classification. Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, which confirms the capability of CKRL in modeling confidence with structural information for both KG noise detection and knowledge representation learning. |
| Researcher Affiliation | Collaboration | Ruobing Xie,1,2 Zhiyuan Liu,1 Fen Lin,2 Leyu Lin2 1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, China 2 Search Product Center, WeChat Search Application Department, Tencent, China. |
| Pseudocode | No | The paper describes methods using text and equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and dataset of this paper can be obtained from https://github.com/thunlp/CKRL. |
| Open Datasets | Yes | In this paper, we evaluate our CKRL model based on FB15K (Bordes et al. 2013), which is a typical benchmark knowledge graph extracted from Freebase (Bollacker et al. 2008). |
| Dataset Splits | Yes | All three noisy datasets share the same entities, relations, validation and test sets with FB15K, with all generated negative triples fused into the original training set of FB15K. The statistics are listed in Table 1. The optimal configurations of our models are: δ = 0.001, α = 0.9, β = 0.0001, which are optimized on the validation set. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory amounts, or detailed computer specifications) used for running the experiments are provided. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | We train our CKRL model using mini-batch SGD with the margin γ empirically set as 1.0. We select the overall learning rate δ among {0.0005, 0.001, 0.002}, which is fixed during training. For local triple confidence, we select the descend controller α among {0.5, 0.7, 0.9} and the ascend controller β among {0.0001, 0.0005, 0.001}. For prior path confidence, the smoothing ϵ is empirically set as 0.01. The optimal configurations of our models are: δ = 0.001, α = 0.9, β = 0.0001, which are optimized on the validation set. We also evaluate various combination weights λi when we calculate the overall triple confidence based on the three proposed methods. We select a unified weighting strategy for different evaluation tasks and datasets according to their overall performances to show the robustness of our CKRL models. Specifically, for CKRL (LT+PP), we select λ1 = 0.9 and λ2 = 0.1, while for CKRL (LT+PP+AP), we select λ1 = 1.5, λ2 = 0.1 and λ3 = 0.4. For fair comparisons, the dimensions of both entity and relation embeddings in all models are equally set to be 50. |
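The setup above trains translation-based embeddings with a margin loss that CKRL additionally scales by each triple's confidence, so that likely-noisy triples contribute less to the gradient. The sketch below is a minimal, hedged illustration of that idea using a TransE-style energy and the paper's reported margin (γ = 1.0), learning rate (δ = 0.001), and dimension (50); the function names and toy data are assumptions for illustration, not the authors' released code.

```python
import numpy as np

# Illustrative sketch of a confidence-weighted margin loss in the spirit of
# CKRL. Hyperparameter values are taken from the paper's reported setup;
# everything else (names, toy embeddings) is hypothetical.
rng = np.random.default_rng(0)

DIM = 50     # entity/relation embedding dimension used in the paper
GAMMA = 1.0  # margin gamma, empirically set in the paper
LR = 0.001   # optimal learning rate delta reported in the paper

def transe_score(h, r, t):
    """TransE energy ||h + r - t||_1 (lower means more plausible)."""
    return np.abs(h + r - t).sum()

def weighted_margin_loss(pos, neg, confidence):
    """Margin-based ranking loss scaled by the triple confidence C(h, r, t)."""
    h, r, t = pos
    h_n, r_n, t_n = neg
    margin_term = max(0.0, GAMMA + transe_score(h, r, t) - transe_score(h_n, r_n, t_n))
    return margin_term * confidence

# Toy embeddings for one positive triple and one corrupted (negative) triple.
h, r, t = (rng.normal(size=DIM) for _ in range(3))
h_neg = rng.normal(size=DIM)

loss_trusted = weighted_margin_loss((h, r, t), (h_neg, r, t), confidence=1.0)
loss_noisy = weighted_margin_loss((h, r, t), (h_neg, r, t), confidence=0.1)
# A low-confidence (likely noisy) triple contributes proportionally less loss,
# so its gradient influence on the embeddings is downweighted.
assert loss_noisy <= loss_trusted
```

In the full model the confidence itself is a weighted combination of local triple (LT), prior path (PP), and adaptive path (AP) scores with the λi weights quoted above, and is updated during training rather than fixed as in this toy example.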