# 5* Knowledge Graph Embeddings with Projective Transformations
Authors: Mojtaba Nayyeri, Sahar Vahdati, Can Aykul, Jens Lehmann
AAAI 2021, pp. 9064-9072
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation shows that 5*E outperforms existing models on standard benchmarks. Following the best practices of evaluations for embedding models, we consider the most-used metrics Mean Reciprocal Rank (MRR) and Hits@n (n = 1, 3, 10). We evaluated our model on four widely used benchmark datasets, namely FB15k-237 (Toutanova and Chen 2015), WN18RR (Dettmers et al. 2018), and NELL (four different versions: NELL-995-h25, NELL-995-h50, NELL-995-h75 and NELL-995-h100) (Xiong, Hoang, and Wang 2017; Balazevic, Allen, and Hospedales 2019a). The results of comparing 5*E to other models on FB15k-237 and WN18RR are shown in Table 1 (d = 100 and 500) and on NELL in Table 2 (d = 100 and 200). Our model outperforms all other models across all metrics on WN18RR. (See the model and metric sketches after the table.) |
| Researcher Affiliation | Collaboration | Mojtaba Nayyeri (1,2), Sahar Vahdati (2), Can Aykul (1), Jens Lehmann (1,3); (1) Smart Data Analytics Group, University of Bonn, Germany; (2) Nature-Inspired Machine Intelligence, InfAI, Dresden, Germany; (3) Fraunhofer IAIS, Dresden, Germany |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our model is implemented in PyTorch and the code is available online at https://bit.ly/2NXplO1 |
| Open Datasets | Yes | We evaluated our model on four widely used benchmark datasets, namely FB15k-237 (Toutanova and Chen 2015), WN18RR (Dettmers et al. 2018), and NELL (four different versions: NELL-995-h25, NELL-995-h50, NELL-995-h75 and NELL-995-h100) (Xiong, Hoang, and Wang 2017; Balazevic, Allen, and Hospedales 2019a). |
| Dataset Splits | No | The paper does not explicitly state train/validation/test splits by percentage or count. It only describes the training protocol: "Similar to QuatE and ComplEx, we developed our model on top of a standard framework (Lacroix, Usunier, and Obozinski 2018), applied 1-N scoring loss with N3 regularization, and added reverse counterparts of each triple to the train set." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper states: "Our model is implemented in PyTorch" but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper mentions applying "1-N scoring loss with N3 regularization" and adding "reverse counterparts of each triple to the train set", but it lacks specific hyperparameter values such as learning rate, batch size, number of epochs, or optimizer settings typically found in an experimental setup description. (A sketch of this training recipe appears after the table.) |
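
For readers unfamiliar with the model named in the title: 5*E represents each relation as a projective (Möbius) transformation applied element-wise to a complex head-entity embedding. The sketch below is our own minimal illustration of that idea, not the authors' released code; the inner-product score is a placeholder assumption, as the paper's exact scoring function is not quoted in this report.

```python
import torch

def mobius_transform(h, a, b, c, d):
    """Element-wise projective (Mobius) transformation of a complex
    head embedding: r(h) = (a*h + b) / (c*h + d).
    h, a, b, c, d are complex tensors of shape (dim,)."""
    return (a * h + b) / (c * h + d)

def score(h, t, rel):
    """Illustrative similarity between the transformed head and the
    tail embedding (placeholder; the paper's exact score may differ)."""
    a, b, c, d = rel
    return torch.sum(mobius_transform(h, a, b, c, d) * torch.conj(t)).real

# Tiny usage example with random 4-dimensional complex embeddings.
dim = 4
h = torch.randn(dim, dtype=torch.cfloat)
t = torch.randn(dim, dtype=torch.cfloat)
rel = tuple(torch.randn(dim, dtype=torch.cfloat) for _ in range(4))
print(score(h, t, rel))  # a single real-valued score
```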
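
The Research Type row above quotes the evaluation metrics MRR and Hits@n (n = 1, 3, 10). As a quick reference, here is a minimal sketch of how such rank-based metrics are computed; the function name and the use of 1-based filtered ranks are our assumptions, not taken from the paper's code.

```python
import numpy as np

def mrr_and_hits(ranks, ns=(1, 3, 10)):
    """Aggregate 1-based filtered ranks of the correct entity
    (one rank per test triple) into MRR and Hits@n."""
    ranks = np.asarray(ranks, dtype=np.float64)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for n in ns:
        metrics[f"Hits@{n}"] = float(np.mean(ranks <= n))
    return metrics

# Example: three test triples whose correct entities ranked 1, 4 and 12.
print(mrr_and_hits([1, 4, 12]))
# {'MRR': 0.444..., 'Hits@1': 0.333..., 'Hits@3': 0.333..., 'Hits@10': 0.667}
```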
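
Finally, the training recipe cited under Dataset Splits and Experiment Setup (1-N scoring loss with N3 regularization and reverse triples, following Lacroix, Usunier, and Obozinski 2018) can be illustrated roughly as follows. This is a hedged sketch under our own tensor conventions; the helper names are hypothetical and no hyperparameter values from the paper are implied.

```python
import torch
import torch.nn.functional as F

def add_reverse_triples(triples, num_relations):
    """Append the reverse counterpart (t, r + num_relations, h)
    for every training triple (h, r, t)."""
    h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
    reverse = torch.stack([t, r + num_relations, h], dim=1)
    return torch.cat([triples, reverse], dim=0)

def n3_penalty(*factors):
    """N3 (nuclear 3-norm) regularizer of Lacroix et al. (2018):
    sum of mean cubed absolute values of the embedding factors."""
    return sum(torch.mean(torch.abs(f) ** 3) for f in factors)

def one_to_n_loss(scores, true_tails, factors, reg_weight):
    """1-N scoring: `scores` holds one score per candidate entity for
    each (h, r) query; cross-entropy against the true tail index,
    plus the N3 penalty on the embedding factors involved."""
    return F.cross_entropy(scores, true_tails) + reg_weight * n3_penalty(*factors)
```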