SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions
Authors: Han Xiao, Minlie Huang, Lian Meng, Xiaoyan Zhu
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our method achieves substantial improvements against baselines on the tasks of knowledge graph completion and entity classification. |
| Researcher Affiliation | Academia | Han Xiao, Minlie Huang, Lian Meng, Xiaoyan Zhu; State Key Lab. of Intelligent Technology and Systems, National Lab. for Information Science and Technology, Dept. of Computer Science and Technology, Tsinghua University, Beijing 100084, PR China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The URL provided (http://www.ibookman.net/conference.html) is a general project/conference page, not a direct link to a source-code repository for the method described in this paper. |
| Open Datasets | Yes | Our experiments are conducted on three public benchmark datasets that are subsets of Wordnet and Freebase. Regarding the statistics of these datasets, we strongly suggest the readers refer to (Xie et al. 2016) and (Lin et al. 2015). The entity descriptions of FB15K and FB20K are the same as DKRL (Xie et al. 2016)... The textual information of WN18 is the definitions that we extract from the Wordnet. |
| Dataset Splits | Yes | When we filter out the corrupted triples that exist in the training, validation, or test datasets, this is the Filter setting (see the filtered-evaluation sketch below the table). We have attempted several settings on the validation dataset to get the best configuration. |
| Hardware Specification | No | The paper mentions running times ("TransE costs 0.28s for one round in Link Prediction and our model costs 0.36s in the same setting") but does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | Yes | Under the bern. sampling strategy, the optimal configurations of our model SSP are as follows. For WN18, embedding dimension d = 100, learning rate α = 0.001, margin γ = 6.0, balance factor λ = 0.2, and for SSP(Joint) μ = 0.1. For FB15K, embedding dimension d = 100, learning rate α = 0.001, margin γ = 1.8, balance factor λ = 0.2, and for SSP(Joint) μ = 0.1. (See the configuration sketch below the table.) |
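
To make the Filter setting quoted in the Dataset Splits row concrete, below is a minimal sketch of a filtered rank computation for link prediction. It assumes a scoring function `score(h, r, t)` where a lower score means a more plausible triple, and triples stored as integer tuples; these names are illustrative and are not taken from the authors' code.

```python
# Minimal sketch of the "Filter" evaluation setting: corrupted triples that
# already appear in the training, validation, or test sets are skipped
# before ranking. Only tail corruption is shown; head corruption is symmetric.

def filtered_rank(test_triple, entities, known_triples, score):
    """Rank the true tail against corrupted tails under the Filter setting."""
    h, r, t = test_triple
    true_score = score(h, r, t)
    rank = 1
    for e in entities:
        if e == t or (h, r, e) in known_triples:
            continue  # filter out corruptions that are actually valid triples
        if score(h, r, e) < true_score:
            rank += 1
    return rank
```

In the Raw setting the `(h, r, e) in known_triples` check is simply dropped, which is why Filter scores are always at least as good as Raw scores.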
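
The optimal hyperparameters reported in the Experiment Setup row can be gathered into a small configuration sketch. Only the numeric values come from the paper; the dictionary keys and the `SSP_CONFIG` name are assumptions introduced for illustration.

```python
# Illustrative configuration for SSP under the bern. sampling strategy,
# using the per-dataset optima reported in the paper.
SSP_CONFIG = {
    "WN18": {
        "embedding_dim": 100,    # d
        "learning_rate": 0.001,  # alpha
        "margin": 6.0,           # gamma
        "balance_factor": 0.2,   # lambda
        "joint_factor": 0.1,     # mu, used only by SSP(Joint)
        "sampling": "bern",
    },
    "FB15K": {
        "embedding_dim": 100,
        "learning_rate": 0.001,
        "margin": 1.8,
        "balance_factor": 0.2,
        "joint_factor": 0.1,
        "sampling": "bern",
    },
}
```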