Improving Cross-lingual Entity Alignment via Optimal Transport

Authors: Shichao Pei, Lu Yu, Xiangliang Zhang

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that our model consistently outperforms the state-of-the-arts with significant improvements on alignment accuracy.
Researcher Affiliation | Academia | Shichao Pei, Lu Yu and Xiangliang Zhang, The Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955, SA, {shichao.pei, lu.yu, xiangliang.zhang}@kaust.edu.sa
Pseudocode | Yes | The overall optimization process of our model is given in Algorithm 1 ("Algorithm 1: OTEA").
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | We used three trilingual knowledge graph datasets from WK3l provided in [Chen et al., 2016; Chen et al., 2018b]. English (En), French (Fr), and German (De) knowledge graphs are included in the WK3l datasets, and the KGs are extracted from the Person domain of DBpedia with known aligned entities as the ground truth. WK3l includes three datasets with different sizes, as shown in Table 1.
Dataset Splits | No | The paper states "We randomly sample 30% of the aligned entities as the training set, and the rest aligned entities for testing." There is no explicit mention of a validation set. (A minimal split sketch, under stated assumptions, follows this table.)
Hardware Specification | No | The paper states "We set same batch size for all methods and run them on a same GPU device, then record the running time of each iteration." However, it does not specify any particular GPU model or other hardware specifications.
Software Dependencies | No | The paper mentions using the "Adam" and "RMSProp" optimizers but does not specify any programming languages, libraries, or other software dependencies with version numbers.
Experiment Setup | Yes | For our OTEA method, the best configuration is γ = 0.5, α = 0.025, α1 = 2.5, weight clipping c = 0.01. Critics are set as two-layer MLPs with 500 hidden units. We use Adam [Kingma and Ba, 2014] to optimize Lk and Le + Lr with lr = 0.001, and use RMSProp [Hinton et al., 2012] to optimize Lg with lr = 5e-5. Meanwhile, we use the L2 norm to avoid potential over-fitting. (A hedged configuration sketch follows this table.)
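
The train/test split quoted in the Dataset Splits row (30% of the aligned entity pairs for training, the rest for testing, no validation set mentioned) can be reproduced with a minimal sketch like the one below. The file name, pair format, and random seed are assumptions for illustration, not details taken from the paper.

```python
import random

def split_aligned_pairs(pairs, train_ratio=0.3, seed=0):
    """Randomly reserve `train_ratio` of the aligned entity pairs for training
    and use the remainder for testing, as described in the paper.
    No validation split is mentioned in the paper, so none is created here."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

# Hypothetical usage: "aligned.tsv" holding one "en_entity \t fr_entity" pair per line.
# with open("aligned.tsv") as f:
#     pairs = [tuple(line.rstrip("\n").split("\t")) for line in f]
# train_pairs, test_pairs = split_aligned_pairs(pairs)
```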
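The Experiment Setup row can be mapped onto a minimal PyTorch sketch of the quoted configuration: two-layer MLP critics with 500 hidden units, Adam (lr = 0.001) for the knowledge and alignment losses, RMSProp (lr = 5e-5) for the optimal-transport critic loss, and WGAN-style weight clipping with c = 0.01. The embedding dimension, vocabulary sizes, and module names are assumptions; the paper does not state them in the quoted passage.

```python
import torch
import torch.nn as nn

EMB_DIM = 75  # assumed embedding dimension; not given in the quoted setup

def make_critic(in_dim=EMB_DIM, hidden=500):
    """Two-layer MLP critic with 500 hidden units, as quoted from the paper."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, 1),
    )

entity_critic = make_critic()
relation_critic = make_critic()

# Hypothetical embedding tables for the two KGs; sizes are placeholders.
embeddings = nn.ModuleList([nn.Embedding(10000, EMB_DIM), nn.Embedding(10000, EMB_DIM)])

# Adam (lr = 0.001) for the knowledge/alignment losses (Lk and Le + Lr),
# RMSProp (lr = 5e-5) for the optimal-transport critic loss (Lg), per the paper.
opt_model = torch.optim.Adam(embeddings.parameters(), lr=1e-3)
opt_critic = torch.optim.RMSprop(
    list(entity_critic.parameters()) + list(relation_critic.parameters()), lr=5e-5
)

def clip_critic_weights(c=0.01):
    """WGAN-style weight clipping with c = 0.01, applied after each critic update."""
    with torch.no_grad():
        for critic in (entity_critic, relation_critic):
            for p in critic.parameters():
                p.clamp_(-c, c)
```

This only wires up the optimizers and critics; the actual loss terms Lk, Le, Lr, and Lg and the alternating update schedule follow Algorithm 1 in the paper and are not reproduced here.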