Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment
Authors: Kun Xu, Linfeng Song, Yansong Feng, Yan Song, Dong Yu
AAAI 2020, pp. 9354-9361 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our model achieves the state-of-the-art performance and our reasoning methods can also significantly improve existing baselines. |
| Researcher Affiliation | Collaboration | 1 Tencent AI Lab, Seattle, US; 2 Peking University, Beijing, China; 3 Sinovation Ventures |
| Pseudocode | Yes | The paper describes the Easy-to-Hard decoding strategy and the Hungarian algorithm in structured 'Step / Description' tables that read like pseudocode: 'Step Description 1 Employ an alignment model to predict alignments for all source entities in the test set. 2 Use a predefined probability threshold α to refine those alignments. ...' (a hedged Python sketch of both procedures appears after this table). |
| Open Source Code | No | The paper does not provide any explicit statement about making the source code available, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate our approach on three large-scale cross-lingual datasets from DBP15K (Sun, Hu, and Li 2017). These datasets are built upon Chinese, English, Japanese and French versions of DBpedia (Auer et al. 2007). |
| Dataset Splits | No | The paper states 'We use the same training/testing split as previous works, 30% for training, and 70% for testing.', but it does not give a percentage or count for a validation set; a 'development set' is mentioned later without details of how it was formed (see the split sketch after this table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'Glove embeddings' but does not specify any software dependencies with version numbers (e.g., Python version, PyTorch/TensorFlow version, specific library versions). |
| Experiment Setup | Yes | For the configurations of the alignment model, we use the same settings as (Xu et al. 2019). Specifically, we use the Adam optimizer (Kingma and Ba 2014) to update parameters with mini-batch size 32. The learning rate is set to 0.001. The hop size of two GCN layers is set to 2 and 3, respectively. For the Easy-to-Hard decoding method, α is set to 0.75, and K is set to 20. For the joint entity alignment, τ is set to 0.10. |
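The 'Pseudocode' row quotes the paper's Easy-to-Hard decoding steps, and the paper also performs joint one-to-one alignment with the Hungarian algorithm. The following is a minimal Python sketch of both ideas, not the authors' implementation: `score_matrix`, `max_rounds`, and the greedy masking loop are illustrative assumptions, the candidate re-ranking controlled by K is omitted, and `scipy.optimize.linear_sum_assignment` stands in for the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def easy_to_hard_decode(score_matrix, alpha=0.75, max_rounds=5):
    """Greedy Easy-to-Hard decoding sketch (assumed, not the paper's code).

    score_matrix[i, j] holds the alignment probability of source entity i
    to target entity j (shape: n_src x n_tgt). Alignments above `alpha`
    are committed first; committed targets are masked out so that harder
    source entities are decided against a shrinking candidate set.
    """
    scores = score_matrix.astype(float).copy()
    n_src, _ = scores.shape
    aligned = {}                        # source index -> target index
    for _ in range(max_rounds):
        progressed = False
        for i in range(n_src):
            if i in aligned:
                continue
            j = int(np.argmax(scores[i]))
            if scores[i, j] >= alpha:   # "easy" case: confident enough to commit
                aligned[i] = j
                scores[:, j] = -np.inf  # remove the target from later competition
                progressed = True
        if not progressed:
            break
    return aligned


def hungarian_decode(score_matrix):
    """One-to-one joint alignment cast as an assignment problem.

    linear_sum_assignment minimizes cost, so the scores are negated to
    maximize total alignment probability under a one-to-one constraint.
    """
    row, col = linear_sum_assignment(-score_matrix)
    return dict(zip(row.tolist(), col.tolist()))
```

In this sketch, source entities whose best score clears α = 0.75 are committed first and their targets removed from the pool, mirroring the "easy first, hard later" intuition of the quoted steps, while `hungarian_decode` always returns a full one-to-one matching.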
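The 'Dataset Splits' row notes a 30%/70% train/test split with no reported validation size. Below is a hedged sketch of how such a split might be reproduced; the 5% development fraction and the random seed are assumptions, since the paper does not specify them.

```python
import random


def split_entity_pairs(pairs, train_ratio=0.30, dev_ratio=0.05, seed=42):
    """30/70 train/test split as quoted from the paper, with a small
    development set carved out of the training portion. The dev ratio
    and the seed are assumptions: the paper mentions a development set
    but does not report its size or how it was selected."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    n_train = int(len(pairs) * train_ratio)
    train, test = pairs[:n_train], pairs[n_train:]
    n_dev = int(len(train) * dev_ratio)
    dev, train = train[:n_dev], train[n_dev:]
    return train, dev, test
```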
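The hyperparameters quoted in the 'Experiment Setup' row can be gathered into a single configuration for reproduction attempts. The sketch below is an assumption-laden illustration: the paper does not name its framework, so the PyTorch `Adam` call and the dictionary layout are ours; only the numeric values come from the paper.

```python
import torch

# Numeric values as quoted from the paper; everything else
# (framework, key names, model class) is an assumption for illustration.
CONFIG = {
    "batch_size": 32,       # mini-batch size
    "learning_rate": 1e-3,  # Adam learning rate
    "gcn_hops": (2, 3),     # hop sizes of the two GCN layers
    "alpha": 0.75,          # Easy-to-Hard decoding threshold
    "K": 20,                # candidate count for Easy-to-Hard decoding
    "tau": 0.10,            # threshold for joint entity alignment
}


def build_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """Adam optimizer with the learning rate reported in the paper."""
    return torch.optim.Adam(model.parameters(), lr=CONFIG["learning_rate"])
```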