Knowledge Graph Embedding by Translating on Hyperplanes
Authors: Zhen Wang, Jianwen Zhang, Jianlin Feng, Zheng Chen
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up. |
| Researcher Affiliation | Collaboration | Department of Information Science and Technology, Sun Yat-sen University, Guangzhou, China; Microsoft Research, Beijing, China |
| Pseudocode | No | The paper describes the training process in text but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | We use the same two data sets which are used in TransE (Bordes et al. 2011; 2013b): WN18, a subset of Wordnet; FB15k, a relatively dense subgraph of Freebase where all entities are present in the Wikilinks database. Both are released in (Bordes et al. 2013b). |
| Dataset Splits | Yes | Table 2: Data sets used in the experiments. Columns: Dataset, #R (relations), #E (entities), #Trip. (Train / Valid / Test). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions models and tools like TransE, NTN, and Sm2r but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | In training TransH, we use learning rate α for SGD among {0.001, 0.005, 0.01}, the margin γ among {0.25, 0.5, 1, 2}, the embedding dimension k among {50, 75, 100}, the weight C among {0.015625, 0.0625, 0.25, 1.0}, and batch size B among {20, 75, 300, 1200, 4800}. The optimal parameters are determined by the validation set. ... For both datasets, we traverse all the training triplets for 500 rounds. |
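
The hyperparameter search quoted in the Experiment Setup row can be summarized as a small grid. Below is a minimal sketch, not the authors' released code: it only enumerates the configurations stated in the paper, and the `grid` helper and `__main__` usage are hypothetical; model training and validation-set selection are left out.

```python
# Sketch of the TransH hyperparameter grid described in the paper (AAAI 2014).
# Assumption: a separate (unshown) routine trains TransH with SGD for each
# configuration and picks the best one on the validation set.
import itertools

learning_rates = [0.001, 0.005, 0.01]            # SGD learning rate alpha
margins        = [0.25, 0.5, 1, 2]               # margin gamma
dimensions     = [50, 75, 100]                   # embedding dimension k
weights_C      = [0.015625, 0.0625, 0.25, 1.0]   # soft-constraint weight C
batch_sizes    = [20, 75, 300, 1200, 4800]       # mini-batch size B
epochs         = 500                             # passes over the training triplets

def grid():
    """Yield every hyperparameter combination in the stated search space."""
    for alpha, gamma, k, C, B in itertools.product(
            learning_rates, margins, dimensions, weights_C, batch_sizes):
        yield {"alpha": alpha, "gamma": gamma, "k": k,
               "C": C, "B": B, "epochs": epochs}

if __name__ == "__main__":
    # 3 * 4 * 3 * 4 * 5 = 720 candidate configurations before validation selection.
    print(sum(1 for _ in grid()), "configurations to evaluate")
```

The grid alone does not reproduce the experiments; the paper reports only the selected optimal settings per dataset, so re-running the full search (or reusing those reported settings) would be needed to match the published numbers.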