Dual Quaternion Knowledge Graph Embeddings
Authors: Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang (pp. 6894-6902)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four real-world datasets demonstrate the effectiveness of our DualE method. |
| Researcher Affiliation | Academia | 1 State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing, China 2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China 4 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China 5 Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China 6 Peng Cheng Laboratory, Shenzhen, China {caozongsheng,yangzhiyong,caoxiaochun}@iie.ac.cn, xuqianqian@ict.ac.cn, qmhuang@ucas.ac.cn |
| Pseudocode | Yes | Please see Algorithm 1 in the Appendix for more details of the training algorithm and the initialization scheme. |
| Open Source Code | Yes | For the appendix and the code, please refer to https://github.com/Lion-ZS/DualE. |
| Open Datasets | Yes | Datasets: WN18 (Bordes et al. 2013), FB15K (Bordes et al. 2013), WN18RR (Dettmers et al. 2017), FB15K-237 (Toutanova and Chen 2015) |
| Dataset Splits | Yes | Table 3 reports the number of entities, relations, and observed triplets in each split for the four benchmarks. FB15K: 14,951 entities, 1,345 relations, 483,142 training / 50,000 validation / 59,071 test. WN18: 40,943 entities, 18 relations, 141,442 / 5,000 / 5,000. FB15k-237: 14,541 entities, 237 relations, 272,115 / 17,535 / 20,466. WN18RR: 40,943 entities, 11 relations, 86,835 / 3,034 / 3,134. |
| Hardware Specification | No | The paper states 'Our settings for hyper-parameter selection using pytorch are as follows:' but does not specify any hardware details such as GPU model, CPU type, or memory. |
| Software Dependencies | No | The paper mentions using 'pytorch' but does not provide a specific version number for it or any other software dependencies. |
| Experiment Setup | Yes | Our settings for hyper-parameter selection using pytorch are as follows: The embedding size k is selected in {50, 100, 150, 200, 250}. The regularization rates λ1 and λ2 are adjusted in {0, 0.02, 0.05, 0.1, 0.15, 0.2}. The learning rate is chosen from 0.02 to 0.1, and different learning rates can be selected according to different datasets. In addition, we create 10 batches of training samples for all the datasets above. |
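The hyper-parameter ranges quoted in the Experiment Setup row can be enumerated as a search grid. The sketch below is a minimal illustration, not the authors' tuning script: the embedding sizes, regularization rates, and batch count come from the paper, while the discrete learning-rate candidates and the exhaustive-grid search strategy are assumptions (the paper only says the learning rate is "chosen from 0.02 to 0.1").

```python
from itertools import product

# Ranges reported in the paper's experiment setup.
embedding_sizes = [50, 100, 150, 200, 250]
reg_rates = [0.0, 0.02, 0.05, 0.1, 0.15, 0.2]  # candidates for both lambda1 and lambda2
learning_rates = [0.02, 0.05, 0.1]             # assumed grid; paper says "from 0.02 to 0.1"
num_batches = 10                               # 10 batches of training samples per dataset

# Enumerate every candidate configuration (assumed exhaustive grid search).
configs = [
    {"k": k, "lambda1": l1, "lambda2": l2, "lr": lr, "num_batches": num_batches}
    for k, l1, l2, lr in product(embedding_sizes, reg_rates, reg_rates, learning_rates)
]
print(len(configs))  # 5 * 6 * 6 * 3 = 540 candidate configurations
```

In practice the paper tunes λ1 and λ2 jointly over the same candidate set, so tying them (l1 == l2) or sampling the grid randomly would shrink the search considerably.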