Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs

Authors: Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, Dongyan Zhao

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on three real-world cross-lingual datasets show that our approach delivers better and more robust results over the state-of-the-art alignment methods by learning better KG representations."
Researcher Affiliation | Academia | "¹Institute of Computer Science and Technology, Peking University, China ²School of Computing and Communications, Lancaster University, U.K. {wyting, lxlisa, fengyansong, ruiyan, zhaodongyan}@pku.edu.cn, z.wang@lancaster.ac.uk"
Pseudocode | No | The paper describes its methods in prose and mathematical equations but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | "We evaluate our approach on three large-scale cross-lingual datasets from DBP15K [Sun et al., 2017]." Word vectors: glove.840B.300d (http://nlp.stanford.edu/projects/glove/).
Dataset Splits | Yes | "We use the same training/testing split with previous works [Sun et al., 2018], 30% for training and 70% for testing."
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions glove.840B.300d word vectors but does not specify the software environment or library versions used for implementation.
Experiment Setup | Yes | "The configuration we used is: β1 = 0.1, β2 = 0.3, and γ = 1.0. The dimensions of hidden representations in the dual and primal attention layers are 300, 600, and 300. All dimensions of hidden representations in GCN layers are 300. The learning rate is set to 0.001 and we sample K = 125 negative pairs every 10 epochs."
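
To make the quoted setup easier to act on, the numbers from the Experiment Setup and Dataset Splits rows can be gathered into one place. The sketch below is ours, not from any released code: every key name is a label we chose, and the split_pairs helper (including its fixed seed) is hypothetical, added only to illustrate the reported 30%/70% split.

```python
import random

# Hyperparameters exactly as quoted from the paper; all key names are ours.
CONFIG = {
    "beta1": 0.1,                       # β1, as quoted
    "beta2": 0.3,                       # β2, as quoted
    "gamma": 1.0,                       # γ, as quoted
    "attention_dims": (300, 600, 300),  # hidden sizes, dual/primal attention layers
    "gcn_dim": 300,                     # hidden size of every GCN layer
    "learning_rate": 1e-3,
    "num_negatives": 125,               # K negative pairs, re-sampled periodically
    "resample_every": 10,               # negatives re-sampled every 10 epochs
    "train_fraction": 0.30,             # 30% train / 70% test [Sun et al., 2018]
}

def split_pairs(aligned_pairs, train_fraction=CONFIG["train_fraction"], seed=0):
    """Random 30/70 train/test split of pre-aligned entity pairs.

    The paper reuses the split protocol of Sun et al. (2018); this helper
    and its seed are illustrative only.
    """
    pairs = list(aligned_pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_fraction)
    return pairs[:cut], pairs[cut:]
```

Note that a full reproduction would still need choices the paper leaves open (optimizer beyond the learning rate, epoch count, random seeds), consistent with the "No" results for hardware and software dependencies above.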
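
The Open Datasets row also points to the glove.840B.300d vectors. A minimal loader for that file could look like the following; the function name and the optional vocab filter are our own, and the length check skips the few lines in the 840B release whose tokens contain spaces.

```python
import numpy as np

def load_glove(path="glove.840B.300d.txt", vocab=None):
    """Load GloVe vectors (one token followed by 300 floats per line).

    `path` points at the file from http://nlp.stanford.edu/projects/glove/;
    `vocab`, if given, restricts loading to tokens of interest.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            token, values = parts[0], parts[1:]
            if vocab is not None and token not in vocab:
                continue
            if len(values) != 300:  # skip malformed multi-word-token lines
                continue
            vectors[token] = np.asarray(values, dtype=np.float32)
    return vectors
```

Keeping entries as float32 halves the memory footprint for the 2.2M-token vocabulary of the 840B release.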