Entity Alignment between Knowledge Graphs Using Attribute Embeddings
Authors: Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task. |
| Researcher Affiliation | Academia | School of Computing and Information Systems, The University of Melbourne |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and the dataset are made available at http://www.ruizhang.info/GKB/gkb.htm |
| Open Datasets | Yes | We evaluate our model on four real KGs including DBpedia (DBP) (Lehmann et al. 2015), LinkedGeoData (LGD) (Stadler et al. 2012), Geonames (GEO), and YAGO (Hoffart et al. 2013). We compare the aligned entities found by our model with those in three ground truth datasets, DBP-LGD, DBP-GEO, and DBP-YAGO, which contain aligned entities between DBP and LGD, GEO, and YAGO, respectively. |
| Dataset Splits | No | The paper states the batch size and number of epochs used for training, and notes that 30% of the gold standard is used as seed alignments for the baseline models, but it does not specify explicit train/validation/test splits (e.g., percentages or counts) for its own model or describe how the data was partitioned for validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments. |
| Software Dependencies | No | The paper does not mention specific software names with version numbers. |
| Experiment Setup | Yes | We choose the embeddings dimensionality d among {50, 75, 100, 200}, the learning rate of the Adam optimizer among {0.001, 0.01, 0.1}, and the margin γ among {1, 5, 10}. We train the models with a batch size of 100 and a maximum of 400 epochs. |
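
The Experiment Setup row above describes a small hyperparameter grid (embedding dimensionality, Adam learning rate, and margin), with a fixed batch size of 100 and a maximum of 400 epochs. The snippet below is a minimal Python sketch of that grid search; the `train_fn` and `eval_fn` callables are hypothetical placeholders standing in for the authors' released code, and selecting by hits@1 is an assumption based on the paper's reported metric.

```python
# Minimal sketch of the hyperparameter grid from the "Experiment Setup" row.
# The search space and fixed settings (batch size 100, up to 400 epochs) come
# from the paper; train_fn/eval_fn are hypothetical placeholders.
from itertools import product

GRID = {
    "embedding_dim": [50, 75, 100, 200],   # dimensionality d
    "learning_rate": [0.001, 0.01, 0.1],   # Adam optimizer learning rate
    "margin": [1, 5, 10],                  # margin gamma in the ranking loss
}
BATCH_SIZE = 100
MAX_EPOCHS = 400


def run_grid_search(train_fn, eval_fn):
    """Try every combination in GRID and keep the configuration with the best hits@1."""
    best = None
    for dim, lr, margin in product(*GRID.values()):
        model = train_fn(
            embedding_dim=dim,
            learning_rate=lr,
            margin=margin,
            batch_size=BATCH_SIZE,
            max_epochs=MAX_EPOCHS,
        )
        hits_at_1 = eval_fn(model)  # assumed to return hits@1 on held-out aligned pairs
        if best is None or hits_at_1 > best[0]:
            best = (hits_at_1, {"embedding_dim": dim, "learning_rate": lr, "margin": margin})
    return best
```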