Geometry Interaction Knowledge Graph Embeddings

Authors: Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang (pp. 5521-5529)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three well-established knowledge graph completion benchmarks show that our GIE achieves the state-of-the-art performance with fewer parameters.
Researcher Affiliation | Academia | (1) State Key Laboratory of Information Security, Institute of Information Engineering, CAS, Beijing, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; (3) Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China; (4) School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China; (5) Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China; (6) Peng Cheng Laboratory, Shenzhen, China
Pseudocode | No | The paper does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/Lion-ZS/GIE
Open Datasets | Yes | We evaluate our approach on the link prediction task using three standard competition benchmarks as shown in Table 2, namely WN18RR (Dettmers et al. 2017), FB15k-237 (Dettmers et al. 2017) and YAGO3-10 (Mahdisoltani, Biega, and Suchanek 2013). Each dataset is split into training, validation and testing sets, which is the same as the setting of (Sun et al. 2019).
Dataset Splits | Yes | Each dataset is split into training, validation and testing sets, which is the same as the setting of (Sun et al. 2019). For each KG, we follow the standard data augmentation protocol by adding inverse relations (Lacroix, Usunier, and Obozinski 2018) to the datasets for baselines. Table 2 statistics (N = number of entities, M = number of relations; refer to Appendix for more details about ξG): WN18RR: N=41k, M=11, train/valid/test = 87k/3k/3k; FB15k-237: N=15k, M=237, train/valid/test = 272k/18k/20k; YAGO3-10: N=123k, M=37, train/valid/test = 1M/5k/5k.
Hardware Specification | No | We implement GIE in PyTorch and run experiments on a single GPU. The paper does not specify the GPU model, CPU, or any other detailed hardware specifications.
Software Dependencies | No | We implement GIE in PyTorch and run experiments on a single GPU. The paper mentions PyTorch but does not provide its version number or any other software dependencies with specific version numbers.
Experiment Setup | Yes | The hyper-parameters are determined by the grid search. The best models are selected by early stopping on the validation set. In general, the embedding size k is searched in {50, 100, 200, 250}. Learning rate is tuned amongst {0.001, 0.005, 0.05, 0.1}. For some baselines, we report the results in the original papers.
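The tuning protocol quoted above (grid search over embedding size and learning rate, model selection by early stopping on the validation set) can be sketched as follows. This is a minimal illustration, not the authors' code: `train_and_eval` is a hypothetical stand-in that would train GIE and return validation MRR, and the toy surrogate score exists only so the sketch runs end to end.

```python
import itertools

# Search spaces reported in the paper.
EMBEDDING_SIZES = [50, 100, 200, 250]
LEARNING_RATES = [0.001, 0.005, 0.05, 0.1]

def train_and_eval(k, lr, patience=3):
    """Hypothetical placeholder: train GIE and return best validation MRR.

    Real code would stop training once validation MRR fails to improve
    for `patience` consecutive evaluations (early stopping). Here a toy
    surrogate score stands in so the sketch is runnable.
    """
    return 1.0 / (1.0 + abs(k - 200) / 100 + abs(lr - 0.005) * 10)

def grid_search():
    """Try every (k, lr) pair and keep the configuration with best MRR."""
    best = {"mrr": -1.0, "k": None, "lr": None}
    for k, lr in itertools.product(EMBEDDING_SIZES, LEARNING_RATES):
        mrr = train_and_eval(k, lr)
        if mrr > best["mrr"]:
            best = {"mrr": mrr, "k": k, "lr": lr}
    return best

best = grid_search()
print(best["k"], best["lr"])
```

With the surrogate score above, the search selects k=200 and lr=0.005; with a real `train_and_eval`, the same loop would recover the paper's chosen hyper-parameters.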
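The inverse-relation augmentation cited in the Dataset Splits row (Lacroix, Usunier, and Obozinski 2018) is a standard preprocessing step: for every triple (h, r, t), a reciprocal triple (t, r_inv, h) is added, doubling the relation vocabulary. A hedged sketch with illustrative names (integer-ID triples, `r_inv = r + num_relations`):

```python
def add_inverse_relations(triples, num_relations):
    """For each (head, rel, tail), also emit (tail, rel + num_relations, head).

    Relations are integer IDs in [0, num_relations); the inverse of
    relation r is assigned ID r + num_relations.
    """
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + num_relations, h))
    return augmented

triples = [(0, 0, 1), (1, 1, 2)]
print(add_inverse_relations(triples, num_relations=2))
# → [(0, 0, 1), (1, 1, 2), (1, 2, 0), (2, 3, 1)]
```

Applied to the training sets in Table 2, this yields, e.g., 22 relations for WN18RR and 474 for FB15k-237 after augmentation.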