Multimodal Analogical Reasoning over Knowledge Graphs

Authors: Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate with multimodal knowledge graph embedding and pre-trained Transformer baselines, illustrating the potential challenges of the proposed task. We further propose a novel model-agnostic Multimodal analogical reasoning framework with Transformer (MarT) motivated by the structure mapping theory, which can obtain better performance.
Researcher Affiliation | Academia | 1 Zhejiang University, AZFT Joint Lab for Knowledge Engine; 2 National University of Singapore. {zhangningyu,leili21,xiang chen,liangxiaozhuan,231sm,huajunsir}@zju.edu.cn
Pseudocode | No | The paper describes its methods and includes mathematical equations, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and datasets are available in https://github.com/zjunlp/MKG_Analogy.
Open Datasets | Yes | Specifically, we construct a Multimodal Analogical Reasoning data Set (MARS) and a multimodal knowledge graph MarKG to support this task, with linked external entities in Wikidata and images from Laion-5B (Schuhmann et al., 2021). Code and datasets are available in https://github.com/zjunlp/MKG_Analogy.
Dataset Splits | Yes | MARS has 10,685 training, 1,228 validation and 1,415 test instances, which are more significant than previous language analogy datasets.
Hardware Specification | Yes | We utilize Pytorch to conduct all experiments with 1 Nvidia 3090 GPU.
Software Dependencies | No | The paper mentions using 'Pytorch' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | The details of hyper-parameters can be seen in Table 8. Hyper-parameters (MKGE baselines / MPT baselines): epoch {300, 1000} / 15; sequence length — / 128; learning rate {1e-2, 5e-3} / {3e-5, 4e-5, 5e-5}; batch size 1000 / 64; optimizer {Adagrad, SGD} / AdamW; adam epsilon — / 1e-8; λ — / {0.38, 0.43, 0.45}
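For readers who want to re-run the reported search, the Table 8 values above can be expressed as a small hyper-parameter grid. The snippet below is a minimal sketch, not the authors' actual configuration code: the dictionary keys and the configs helper are illustrative names, and the assignment of single-column values (sequence length, adam epsilon, λ) to the MPT grid follows the reconstructed table above.

```python
# Sketch of the Table 8 search space as Python grids (key names are illustrative).
from itertools import product

MKGE_GRID = {
    "epochs": [300, 1000],
    "lr": [1e-2, 5e-3],
    "batch_size": [1000],
    "optimizer": ["Adagrad", "SGD"],
}

MPT_GRID = {
    "epochs": [15],
    "seq_len": [128],
    "lr": [3e-5, 4e-5, 5e-5],
    "batch_size": [64],
    "optimizer": ["AdamW"],
    "adam_epsilon": [1e-8],
    "lambda": [0.38, 0.43, 0.45],  # balance coefficient reported for the MarT framework
}

def configs(grid):
    """Enumerate every hyper-parameter combination in a grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    for cfg in configs(MPT_GRID):
        print(cfg)  # e.g. {'epochs': 15, 'seq_len': 128, 'lr': 3e-05, ...}
```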