Bootstrapping Entity Alignment with Knowledge Graph Embedding
Authors: Zequn Sun, Wei Hu, Qingheng Zhang, Yuzhong Qu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on real-world datasets showed that the proposed approach significantly outperformed the state-of-the-art embedding-based ones for entity alignment. |
| Researcher Affiliation | Academia | Zequn Sun, Wei Hu, Qingheng Zhang and Yuzhong Qu, State Key Laboratory for Novel Software Technology, Nanjing University, China. {zqsun, qhzhang}.nju@gmail.com, {whu, yzqu}@nju.edu.cn |
| Pseudocode | No | The paper includes mathematical equations and descriptions of processes, but no explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our source code, datasets and experimental results are available online: https://github.com/nju-websoft/BootEA |
| Open Datasets | Yes | DBP15K [Sun et al., 2017] contains three cross-lingual datasets built from the multilingual versions of DBpedia: DBP-ZH-EN (Chinese to English), DBP-JA-EN (Japanese to English) and DBP-FR-EN (French to English). Each dataset contains 15 thousand reference entity alignments. DWY100K contains two large-scale datasets extracted from DBpedia, Wikidata and YAGO3, denoted by DBP-WD and DBP-YG. Each dataset has 100 thousand reference entity alignments. The extraction method followed that for DBP15K. ... Our source code, datasets and experimental results are available online: https://github.com/nju-websoft/BootEA |
| Dataset Splits | No | Following JAPE [Sun et al., 2017], we used 30% of the reference entity alignments as prior alignment and left the rest as test data. The paper specifies a train/test split but does not mention a separate validation set; a minimal split sketch is given after the table. |
| Hardware Specification | Yes | Our experiments were conducted on a personal workstation with an Intel Xeon E3 3.3 GHz CPU, an NVIDIA GeForce GTX 1080 Ti GPU and 128 GB memory. |
| Software Dependencies | No | We used TensorFlow to develop our approach, called BootEA. The paper mentions TensorFlow but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | For BootEA, we used the configuration below: γ1 = 0.01, γ2 = 2.0, γ3 = 0.7, µ1 = 0.2 and µ2 = 0.1. Also, ϵ = 0.9 for DBP15K and ϵ = 0.98 for DWY100K. 10 negative triples were sampled for each positive triple. The learning rate was set to 0.01 and training ran for 500 epochs, with one iteration of bootstrapping after every 10 epochs of embedding training. A configuration sketch collecting these values appears after the table. |
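
The 30%/70% protocol in the "Dataset Splits" row amounts to a shuffle-and-slice over the reference alignment pairs. A minimal sketch in Python (the function name, seed, and tuple layout are assumptions for illustration, not identifiers from the BootEA repository):

```python
import random

def split_reference_alignment(pairs, train_ratio=0.3, seed=42):
    """Shuffle reference entity-alignment pairs and split them into
    prior (training) alignment and test alignment, following the
    30%/70% protocol above. `pairs` is a list of
    (kg1_entity, kg2_entity) tuples."""
    rng = random.Random(seed)
    shuffled = list(pairs)  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Example with DBP15K-sized data: 15,000 pairs -> 4,500 prior / 10,500 test
# prior_alignment, test_alignment = split_reference_alignment(all_pairs)
```

Because the paper reports no validation set, the sketch produces only two partitions; the fixed seed is an added assumption for reproducibility.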
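The hyperparameters in the "Experiment Setup" row fit into a single flat configuration, and the bootstrapping cadence implies an outer training schedule. A hedged sketch (key names and the commented-out placeholder functions are hypothetical; the released code may organize these differently):

```python
# Reported BootEA hyperparameters (key names are illustrative).
BOOTEA_CONFIG = {
    "gamma1": 0.01,                  # γ1
    "gamma2": 2.0,                   # γ2
    "gamma3": 0.7,                   # γ3
    "mu1": 0.2,                      # µ1
    "mu2": 0.1,                      # µ2
    "epsilon": 0.9,                  # ϵ for DBP15K; 0.98 for DWY100K
    "neg_triples_per_positive": 10,  # negative samples per positive triple
    "learning_rate": 0.01,
    "total_epochs": 500,
    "epochs_per_bootstrapping": 10,  # one bootstrapping iteration per 10 epochs
}

# Training schedule implied by the row above: one bootstrapping
# (alignment-labeling) iteration after every 10 embedding epochs.
for epoch in range(1, BOOTEA_CONFIG["total_epochs"] + 1):
    # train_embeddings_one_epoch(...)   # hypothetical placeholder
    if epoch % BOOTEA_CONFIG["epochs_per_bootstrapping"] == 0:
        pass  # bootstrap_alignment(...)  # hypothetical placeholder
```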