Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction
Authors: Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, Jie Wang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task. |
| Researcher Affiliation | Academia | University of Science and Technology of China {zzq96, jycai}@mail.ustc.edu.cn {zhyd73, jiewangx}@ustc.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE. |
| Open Datasets | Yes | We evaluate our proposed models on three commonly used knowledge graph datasets: WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova and Chen 2015), and YAGO3-10 (Mahdisoltani, Biega, and Suchanek 2013). |
| Dataset Splits | Yes | We use Adam (Kingma and Ba 2015) as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. ... Table 2: Statistics of datasets. The symbols #E and #R denote the number of entities and relations, respectively. #TR, #VA, and #TE denote the size of train set, validation set, and test set, respectively. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using “Adam (Kingma and Ba 2015) as the optimizer” but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries). |
| Experiment Setup | Yes | We use Adam (Kingma and Ba 2015) as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add an additional coefficient to the distance function, i.e., d_r(h, t) = λ1 d_{r,m}(h_m, t_m) + λ2 d_{r,p}(h_p, t_p), where λ1, λ2 ∈ ℝ. |
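The weighted distance quoted in the Experiment Setup row combines HAKE's modulus part, d_{r,m}(h_m, t_m) = ‖h_m ∘ r_m − t_m‖₂, with its phase part, d_{r,p}(h_p, t_p) = ‖sin((h_p + r_p − t_p)/2)‖₁. A minimal sketch of that combined distance, assuming plain Python lists for the embedding parts and illustrative (not paper-reported) values for the coefficients λ1 and λ2:

```python
import math

def hake_distance(h_m, r_m, t_m, h_p, r_p, t_p, lam1=1.0, lam2=0.5):
    """Sketch of HAKE's distance d_r(h, t) = λ1·d_{r,m} + λ2·d_{r,p}.

    The modulus part separates entities at different hierarchy levels;
    the phase part separates entities at the same level. lam1/lam2 are
    the coefficients the paper adds to ease training (defaults here are
    illustrative assumptions, not values from the paper).
    """
    # Modulus part: ||h_m ∘ r_m − t_m||_2 (element-wise product, L2 norm)
    d_mod = math.sqrt(sum((hm * rm - tm) ** 2
                          for hm, rm, tm in zip(h_m, r_m, t_m)))
    # Phase part: ||sin((h_p + r_p − t_p) / 2)||_1 (L1 norm)
    d_phase = sum(abs(math.sin((hp + rp - tp) / 2.0))
                  for hp, rp, tp in zip(h_p, r_p, t_p))
    return lam1 * d_mod + lam2 * d_phase
```

When the tail embedding exactly matches the relation-transformed head (t_m = h_m ∘ r_m and t_p = h_p + r_p), both parts vanish and the distance is zero, which is the condition the model optimizes toward for true triples.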