OTKGE: Multi-modal Knowledge Graph Embeddings via Optimal Transport

Authors: Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao, Qingming Huang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on well-established multi-modal knowledge graph completion benchmarks show that our OTKGE achieves state-of-the-art performance."
Researcher Affiliation | Collaboration | Zongsheng Cao (1,2), Qianqian Xu (3), Zhiyong Yang (4), Yuan He (5), Xiaochun Cao (6,1), Qingming Huang (4,3,7,8). Affiliations: 1 SKLOIS, Institute of Information Engineering, CAS; 2 School of Cyber Security, University of Chinese Academy of Sciences; 3 Key Lab of Intelligent Information Processing, Institute of Computing Technology, CAS; 4 School of Computer Science and Technology, University of Chinese Academy of Sciences; 5 Alibaba Group; 6 School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University; 7 BDKM, University of Chinese Academy of Sciences; 8 Peng Cheng Laboratory
Pseudocode | Yes | "Algorithm 1: Multi-modal representations fusion." (A hedged sketch of this kind of OT-based fusion step is given after the table.)
Open Source Code | Yes | https://github.com/Lion-ZS/OTKGE
Open Datasets | Yes | "Dataset. In terms of the link prediction task, we conduct the experiments and evaluate OTKGE with two standard competition benchmarks as shown in Table 1. These include the multi-modal datasets WN9-IMG [41] and FB-IMG [19]."
Dataset Splits | Yes | "Table 1: Statistics of the datasets used in this paper. (Num_e denotes the number of entities and Num_r the number of relations.) Dataset ... Training Validation Test"
Hardware Specification | No | "In the course of the experiment, we implement OTKGE with PyTorch and conduct experiments with a single GPU." (Only "a single GPU" is stated; no GPU model or memory is specified.)
Software Dependencies | No | Same sentence as above: PyTorch is named, but no version numbers or dependency list are given.
Experiment Setup | Yes | "Specifically, the embedding size k is searched in {100, 200, 400, 500} and the learning rate is searched in {0.001, 0.005, 0.01, 0.05, 0.1}." (A sketch of this grid search appears after the fusion sketch below.)
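
Since the paper's Algorithm 1 (multi-modal representations fusion) is reported only as pseudocode, the following is a minimal sketch of what an optimal-transport-based fusion step can look like, using entropic regularization and Sinkhorn iterations in PyTorch. The function names (`sinkhorn_plan`, `ot_fuse`), the uniform marginals, the parameter `eps`, and the final averaging rule are assumptions for illustration, not the authors' exact formulation.

```python
import torch

def sinkhorn_plan(cost, eps=0.1, n_iters=50):
    """Entropic-OT transport plan between two uniform discrete
    distributions via Sinkhorn iterations (illustrative solver)."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)   # uniform source marginal (assumption)
    b = torch.full((m,), 1.0 / m)   # uniform target marginal (assumption)
    cost = cost / cost.max()        # rescale cost for numerical stability
    K = torch.exp(-cost / eps)      # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)         # column scaling toward marginal b
        u = a / (K @ v)             # row scaling toward marginal a
    return u[:, None] * K * v[None, :]   # transport plan P

def ot_fuse(struct_emb, visual_emb, eps=0.1):
    """Align visual embeddings to the structural space via an OT plan,
    then fuse by averaging (hypothetical fusion rule)."""
    cost = torch.cdist(struct_emb, visual_emb) ** 2  # squared-Euclidean ground cost
    P = sinkhorn_plan(cost, eps)
    # Barycentric projection: each row of P weights the visual embeddings.
    aligned = (P @ visual_emb) / P.sum(dim=1, keepdim=True)
    return 0.5 * (struct_emb + aligned)

# Toy usage: 5 entities with 64-dim structural and visual embeddings.
fused = ot_fuse(torch.randn(5, 64), torch.randn(5, 64))
print(fused.shape)  # torch.Size([5, 64])
```

In OTKGE itself the fusion is trained jointly with the knowledge graph embedding objective; this sketch isolates only an alignment-and-average step under the assumptions above.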
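
The experiment-setup row quotes a grid search over embedding size and learning rate. Below is a minimal sketch of that search loop; `train_and_validate` is a hypothetical stand-in for the actual training-plus-validation routine, which the quoted setup does not describe.

```python
from itertools import product

# Search grids quoted in the experiment setup above.
EMBEDDING_SIZES = [100, 200, 400, 500]
LEARNING_RATES = [0.001, 0.005, 0.01, 0.05, 0.1]

def grid_search(train_and_validate):
    """Exhaustive search over the 4 x 5 grid; `train_and_validate`
    is a hypothetical callable returning a validation metric
    (e.g., MRR) to maximize."""
    best_config, best_score = None, float("-inf")
    for k, lr in product(EMBEDDING_SIZES, LEARNING_RATES):
        score = train_and_validate(embedding_size=k, learning_rate=lr)
        if score > best_score:
            best_config, best_score = (k, lr), score
    return best_config, best_score
```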