Neighborhood-Aware Attentional Representation for Multilingual Knowledge Graphs
Authors: Qiannan Zhu, Xiaofei Zhou, Jia Wu, Jianlong Tan, Li Guo
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on two real-world datasets DBP15K and DWY100K, and the experimental results show that the proposed model NAEA significantly and consistently outperforms state-of-the-art entity alignment models. |
| Researcher Affiliation | Academia | 1Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China 2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3Department of Computing, Macquarie University, Sydney, Australia |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not provide a structured pseudocode or algorithm block. |
| Open Source Code | No | No explicit statement about releasing source code or a link to a code repository was found in the paper. |
| Open Datasets | Yes | We conduct experiments on two real-world datasets DBP15K and DWY100K. DBP15K [Sun et al., 2017] is selected from the multilingual versions of DBpedia that includes entity alignment links from entities of English version to those in other languages. [...] DWY100K [Sun et al., 2018] is built from three large-scale multilingual knowledge graphs: DBpedia, Wikidata and YAGO3. |
| Dataset Splits | Yes | In this experiment, we randomly select monolingual triplets from DBP15K (ZH-EN) and DBP-WD to organize the training, valid and test set according to ratio 8 : 1 : 1. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers were mentioned in the paper. |
| Experiment Setup | Yes | In our model, we set the maximum number of neighbors n as 200, and select the dimension of entity (relation) embeddings m from {50, 75, 100, 150, 200}, the learning rate η from {0.001, 0.01, 0.1}, β from {0, 0.2, 0.4, 0.6, 0.8, 1}, λ from {0.1, 0.5, 1, 1.5, 2}, µ1 from {0.5, 1, 2, 3, 4}, µ2 from {0.01, 0.1, 0.5, 0.8, 1, 1.5, 2}, γ from {0.1, 0.5, 1, 1.5, 2, 2.5}, and the number of heads K from {1, 2, 4, 6, 8}. The optimal parameter configuration is m = 75, β = 0.8, λ = 1, µ1 = 1, µ2 = 0.1, γ = 2, K = 4, η = 0.01. For each positive triplet, we select 10 negative triples for training, and set the training epochs as 1000. |
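The experiment-setup and dataset-split rows above contain enough detail to reconstruct the basic scaffolding of a reproduction attempt. The sketch below records the paper's reported best hyperparameters as a plain config dict and implements the 8 : 1 : 1 random triplet split described in the Dataset Splits row. All names (`NAEA_CONFIG`, `split_triplets`, the dict keys) are our own labels for illustration, not identifiers from the authors' (unreleased) code.

```python
import random

# Best hyperparameter configuration reported in the paper's Experiment Setup.
# Key names are descriptive labels chosen here, not the authors' variable names.
NAEA_CONFIG = {
    "max_neighbors_n": 200,       # maximum number of neighbors n
    "embedding_dim_m": 75,        # entity/relation embedding dimension m
    "beta": 0.8,
    "lambda": 1,
    "mu1": 1,
    "mu2": 0.1,
    "gamma": 2,
    "num_heads_K": 4,
    "learning_rate_eta": 0.01,
    "negatives_per_positive": 10,
    "epochs": 1000,
}


def split_triplets(triplets, ratios=(8, 1, 1), seed=0):
    """Randomly partition triplets into train/valid/test sets by ratio.

    Mirrors the paper's 8 : 1 : 1 random split of monolingual triplets;
    the seed is our addition for determinism, since the paper does not
    report one.
    """
    rng = random.Random(seed)
    shuffled = list(triplets)
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_valid = len(shuffled) * ratios[1] // total
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test
```

For example, splitting 15 000 triplets (the size of a DBP15K alignment set) with the default ratio yields 12 000 training, 1 500 validation, and 1 500 test triplets.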