Understanding and Improving Knowledge Graph Embedding for Entity Alignment

Authors: Lingbing Guo, Qiang Zhang, Zequn Sun, Mingyang Chen, Wei Hu, Huajun Chen

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we empirically verify the effectiveness of NeoEA by a series of experiments. The main results on V1 datasets are shown in Table 1.
Researcher Affiliation | Collaboration | 1) College of Computer Science and Technology, Zhejiang University; 2) ZJU-Hangzhou Global Scientific and Technological Innovation Center; 3) Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; 4) State Key Laboratory for Novel Software Technology, Nanjing University, China.
Pseudocode | Yes | Algorithm 1: NeoEA
Open Source Code | Yes | https://github.com/guolingbing/NeoEA
Open Datasets | Yes | We used the latest benchmark provided by OpenEA (Sun et al., 2020c), which consists of four sub-datasets with two density settings. Specifically, D-W and D-Y denote DBpedia (Auer et al., 2007)-Wikidata (Vrandečić & Krötzsch, 2014) and DBpedia-YAGO (Fabian et al., 2007), respectively. EN-DE and EN-FR denote two cross-lingual datasets, both of which are sampled from DBpedia.
Dataset Splits | Yes | Table 1: Results on V1 datasets (5-fold cross-validation). A minimal fold-loading sketch appears after this table.
Hardware Specification | Yes | We used a single TITAN RTX for training, and SEA (the fastest model) as the basic EEA model.
Software Dependencies | No | The paper mentions several models and frameworks such as OpenEA, TransE, and ConvE, but does not provide specific version numbers for the software dependencies used in the implementation.
Experiment Setup | No | We modified only the initialization of the original project and kept the optimal hyper-parameter settings in OpenEA to ensure a fair comparison. This statement indicates that hyper-parameters were used, but they are not explicitly listed within the paper. A sketch of this fixed-hyper-parameter, varied-initialization setup also appears after this table.
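
The dataset and split rows above reference the OpenEA V1 benchmarks evaluated with 5-fold cross-validation. The sketch below shows one way such a fold could be loaded. The directory layout and file names (a per-fold folder holding tab-separated `train_links`, `valid_links`, and `test_links`) are assumptions based on the public OpenEA release, and `read_links`/`load_fold` are illustrative helpers, not functions from the NeoEA or OpenEA codebases.

```python
from pathlib import Path

def read_links(path):
    """Read tab-separated alignment links: one (KG1 entity, KG2 entity) pair per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def load_fold(dataset_dir, fold=1):
    """Load train/valid/test seed alignments for one of the five folds.

    Assumes an OpenEA-style layout: <dataset_dir>/721_5fold/<fold>/{train,valid,test}_links.
    Adjust the folder and file names to match the local copy of the benchmark.
    """
    fold_dir = Path(dataset_dir) / "721_5fold" / str(fold)
    return {split: read_links(fold_dir / f"{split}_links")
            for split in ("train", "valid", "test")}

if __name__ == "__main__":
    # Hypothetical usage: iterate over the five folds of one V1 dataset (e.g. D-W 15K).
    for k in range(1, 6):
        splits = load_fold("D_W_15K_V1", fold=k)
        print(k, {name: len(pairs) for name, pairs in splits.items()})
```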
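
The experiment-setup row states that only the initialization was modified while OpenEA's optimal hyper-parameters were kept. The snippet below is a minimal sketch of that kind of controlled comparison; the hyper-parameter values and the `init_embeddings` helper are placeholders, not the settings or code actually used in the paper.

```python
import numpy as np

# Placeholder hyper-parameters standing in for OpenEA's per-dataset optimal settings;
# they stay identical across runs so that only the initialization differs.
BASE_HPARAMS = {"embedding_dim": 100, "learning_rate": 0.01, "batch_size": 5000}

def init_embeddings(num_entities, dim, scheme="xavier_uniform", seed=0):
    """Return an entity embedding matrix; the init scheme is the only varied factor."""
    rng = np.random.default_rng(seed)
    if scheme == "xavier_uniform":
        bound = np.sqrt(6.0 / dim)
        return rng.uniform(-bound, bound, (num_entities, dim)).astype(np.float32)
    # Alternative scheme for the modified run.
    return rng.normal(0.0, 1.0 / np.sqrt(dim), (num_entities, dim)).astype(np.float32)

# Two runs sharing BASE_HPARAMS; only the embedding initialization changes.
baseline = init_embeddings(15000, BASE_HPARAMS["embedding_dim"], scheme="xavier_uniform")
modified = init_embeddings(15000, BASE_HPARAMS["embedding_dim"], scheme="normal")
```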