Iterative Entity Alignment via Joint Knowledge Embeddings
Authors: Hao Zhu, Ruobing Xie, Zhiyuan Liu, Maosong Sun
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results on real-world datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment. |
| Researcher Affiliation | Academia | 1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China; 2 Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou 221009, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code can be obtained from https://github.com/thunlp/IEAJKE. |
| Open Datasets | Yes | In this paper, we build three datasets based on FB15K [Bordes et al., 2013], originally extracted from Freebase. |
| Dataset Splits | Yes | The alignments of other entities are used as the test set and validation set. ... Table 1: Statistics of DFB-1, DFB-2 and DFB-3 [shows #Valid column] |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions an optimizer (SGD) and the L1-norm dissimilarity measure, but does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks such as Python, PyTorch, or TensorFlow). |
| Experiment Setup | Yes | As for hyper-parameters, we select the margin γ among {0.5, 1.0, 1.5, 2.0}. We set the dimensions of entity and relation embeddings to the same value n. We set a fixed learning rate λ = 0.001 following [Bordes et al., 2013; Lin et al., 2015]. For Hard Alignment and Soft Alignment, we select θ among {0.5, 1.0, 2.0, 3.0, 4.0}. For Soft Alignment, we select k among {0.5, 1.0, 2.0}. For a fair comparison, all models are trained with the same dimension n = 50 and the same number of epochs, 3000. The optimal configurations of our models are: γ = 1.0, k = 1.0, B = {1000, 1500, 2000, 2500}, C = {5000, 6000, 7000, 8000}, θ = 1.0 for Hard Alignment and θ = 3.0 for Soft Alignment. |
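
For concreteness, the quoted setup can be written down as a small configuration grid. The sketch below is a hypothetical Python illustration of that grid and of the reported optimum for Soft Alignment; the variable names and the `grid` helper are assumptions made for this page and are not taken from the authors' released code at https://github.com/thunlp/IEAJKE.

```python
from itertools import product

# Hyper-parameter values quoted from the paper's experiment setup (assumed grouping).
search_space = {
    "margin_gamma": [0.5, 1.0, 1.5, 2.0],   # margin γ
    "theta": [0.5, 1.0, 2.0, 3.0, 4.0],     # θ for Hard/Soft Alignment
    "k": [0.5, 1.0, 2.0],                   # k for Soft Alignment only
}

# Settings fixed across all compared models for a fair comparison.
fixed = {
    "embedding_dim": 50,     # same dimension n for entities and relations
    "learning_rate": 0.001,  # fixed λ, following Bordes et al. (2013)
    "epochs": 3000,
    "norm": "L1",
    "optimizer": "SGD",
}

# Reported optimum for Soft Alignment (Hard Alignment uses θ = 1.0 instead).
best_soft_alignment = {**fixed, "margin_gamma": 1.0, "k": 1.0, "theta": 3.0}

def grid(space):
    """Yield every combination of values in the search space."""
    keys = list(space)
    for values in product(*(space[key] for key in keys)):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    print(sum(1 for _ in grid(search_space)), "candidate configurations")
    print("best (Soft Alignment):", best_soft_alignment)
```

Enumerated this way, the quoted ranges amount to 60 candidate configurations per alignment variant, which is consistent with the simple grid search the paper describes.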