Relation-Aware Neighborhood Matching Model for Entity Alignment
Authors: Yao Zhu, Hongzhi Liu, Zhonghai Wu, Yingpeng Du
AAAI 2021, pp. 4749-4756
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three real-world datasets demonstrate that the proposed model RNM performs better than state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 Center for Data Science, Peking University, Beijing, China 2 School of Software and Microelectronics, Peking University, Beijing, China 3 National Engineering Center of Software Engineering, Peking University, Beijing, China 4 Key Lab of High Confidence Software Technologies (MOE), Peking University, Beijing, China |
| Pseudocode | Yes | Algorithm 1 Iterative Strategy of RNM |
| Open Source Code | No | The paper states "We utilize TensorFlow to implement the proposed model RNM." but provides no explicit statement about open-sourcing the code and no link to a repository. |
| Open Datasets | Yes | To evaluate the performance of the proposed model, we utilize three cross-lingual datasets from DBP15K as the experimental data. These datasets are subsets of the large-scale knowledge graph DBpedia (Lehmann et al. 2015) |
| Dataset Splits | No | The paper mentions using "15,000 aligned entity pairs" for each dataset and states that "the proportion of seed alignments is set as 30%". While the seed alignments serve as training data, the paper does not explicitly specify a separate validation split or the percentages used for training, validation, and testing. |
| Hardware Specification | Yes | The experiments are conducted on a server with two Intel(R) Xeon(R) CPUs E5-2660 @ 2.20GHz, an NVIDIA Tesla P100 GPU and 16 GB memory. |
| Software Dependencies | No | The paper mentions "We utilize TensorFlow to implement the proposed model RNM." but does not provide a version number for TensorFlow or list any other software dependencies. |
| Experiment Setup | Yes | We employ a 2-layer GCN to learn the entity embeddings. The dimension of hidden layer in GCN is set as 300. The learning rate is set to 0.001. ... we set the margin γ as 1, threshold δe as 5, threshold δr as 3, λ as 0.001, λe as 10, and λr as 200. We select the nearest 100 entities and the nearest 20 relations as candidates for matching. The number of negative samples for each positive one is set as 125, the maximum number of iterations T is set as 4. We first optimize Eq. (2) for 50 epochs, and then jointly train the embeddings using Eq. (5) for 10 epochs. |
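For readers attempting a reproduction, the hyperparameters reported in the Experiment Setup row can be collected into a single configuration object. This is an illustrative sketch only: the dictionary keys are invented for readability and do not come from the authors' (unreleased) code; the values are those stated in the paper.

```python
# Hypothetical consolidation of the RNM hyperparameters reported in the paper.
# Key names are illustrative assumptions; values are quoted from the paper.
RNM_CONFIG = {
    "gcn_layers": 2,                      # 2-layer GCN for entity embeddings
    "gcn_hidden_dim": 300,                # dimension of the GCN hidden layer
    "learning_rate": 0.001,
    "margin_gamma": 1,                    # margin γ
    "threshold_delta_e": 5,               # entity threshold δe
    "threshold_delta_r": 3,               # relation threshold δr
    "lambda_": 0.001,                     # λ
    "lambda_e": 10,                       # λe
    "lambda_r": 200,                      # λr
    "candidate_entities": 100,            # nearest entities kept as match candidates
    "candidate_relations": 20,            # nearest relations kept as match candidates
    "neg_samples_per_positive": 125,
    "max_iterations": 4,                  # iterative strategy rounds T
    "pretrain_epochs": 50,                # epochs optimizing Eq. (2)
    "joint_train_epochs": 10,             # epochs jointly training via Eq. (5)
    "seed_alignment_ratio": 0.30,         # proportion of seed alignments
}
```

Such a config makes it easy to spot which settings a reimplementation has left at framework defaults versus pinned to the paper's values.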