TransMS: Knowledge Graph Embedding for Complex Relations by Multidirectional Semantics
Authors: Shihui Yang, Jidong Tian, Honglun Zhang, Junchi Yan, Hao He, Yaohui Jin
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that TransMS achieves substantial improvements against state-of-the-art baselines; in particular, the Hits@10 of head entity prediction for N-1 relations and of tail entity prediction for 1-N relations improved by about 27.1% and 24.8% on the FB15K database, respectively. |
| Researcher Affiliation | Academia | (1) State Key Lab of Advanced Optical Communication System and Network, Shanghai Jiao Tong University; (2) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; (3) Department of Computer Science and Engineering, Shanghai Jiao Tong University |
| Pseudocode | No | The paper provides mathematical equations but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | "We mainly evaluate our model on two typical knowledge graphs, which are built with WordNet [Miller, 1995] and Freebase [Bollacker et al., 2008] databases used in previous models. In addition, we perform comparative experiments on the WN18RR [Dettmers et al., 2017] and FB15K-237 [Toutanova and Chen, 2015] datasets." The four databases consist of training, validation and testing sets, which have been well constructed as shown in Table 1. |
| Dataset Splits | Yes | The four databases consist of training, validation and testing sets, which have been well constructed as shown in Table 1. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam [Kingma and Ba, 2015]' as an optimizer but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | The learning rate β for Adam is set among {0.1, 0.01, 0.001}, the margin γ among {1.0, 1.5, 2.0, 4.0}, the dimension of vectors d among {50, 100, 150, 200}, the minibatch size b among {200, 1200, 4800}, and additional regularizers among {ℓ1, ℓ2}. For both WN18 and FB15K, the optimal configurations are β = 0.001, γ = 2.0, d = 200, b = 4800 and L = ℓ1 under both corruption strategies (unif and bern), which are the same as the optimal configurations for WN18RR and FB15K-237 under unif. |
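The reported hyperparameter grid and optimal configuration can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the variable names (`search_space`, `optimal`) are assumptions.

```python
from itertools import product

# Hyperparameter search space reported in the experiment setup.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],    # beta, for the Adam optimizer
    "margin":        [1.0, 1.5, 2.0, 4.0],  # gamma, for margin-based ranking loss
    "dim":           [50, 100, 150, 200],   # embedding dimension d
    "batch_size":    [200, 1200, 4800],     # minibatch size b
    "regularizer":   ["l1", "l2"],          # norm L used in the score function
}

# Optimal configuration reported for WN18 and FB15K (both "unif" and "bern"
# corruption strategies), and for WN18RR / FB15K-237 under "unif".
optimal = {
    "learning_rate": 0.001,
    "margin": 2.0,
    "dim": 200,
    "batch_size": 4800,
    "regularizer": "l1",
}

# Enumerate the full grid: 3 * 4 * 4 * 3 * 2 = 288 candidate configurations.
keys = list(search_space)
grid = [dict(zip(keys, values)) for values in product(*search_space.values())]
print(len(grid))        # 288
print(optimal in grid)  # True: the reported optimum lies inside the grid
```

Grid enumeration like this mirrors how the paper's configurations appear to have been selected (per-dataset search over the listed value sets), though the paper does not state the search procedure explicitly.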