LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation

Authors: Bin Shang, Yinliang Zhao, Jun Liu, Di Wang

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on standard datasets validate that LAFA can obtain state-of-the-art performance. (Abstract) ... Experiments Experimental Setup Datasets We evaluate our proposed model by two publicly available multimodal datasets: 1) FB15k-237-IMG (Bordes et al. 2013; Chen et al. 2022) and WN18-IMG (Bordes et al. 2013). ... Table 1 presents the link prediction results on FB15k-237-IMG and WN18-IMG datasets.
Researcher Affiliation Academia 1) Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, School of Computer Science and Technology, Xi'an Jiaotong University, China; 2) National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, China; 3) School of Computer Science and Technology, Xidian University, China
Pseudocode No The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code No The paper does not provide any explicit statement about releasing open-source code for the methodology or a link to a code repository.
Open Datasets Yes We evaluate our proposed model by two publicly available multimodal datasets: 1) FB15k-237-IMG (Bordes et al. 2013; Chen et al. 2022) and WN18-IMG (Bordes et al. 2013).
Dataset Splits Yes Table 2: Statistics of the datasets. FB15k-237-IMG: 14,541 entities, 237 relations, 272,115 train / 17,535 validation / 20,466 test triples. WN18-IMG: 40,943 entities, 18 relations, 141,442 train / 5,000 validation / 5,000 test triples.
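The split statistics quoted above can be encoded for programmatic sanity checks when reproducing the experiments. This is an illustrative sketch; the dictionary layout and helper name are assumptions, not taken from the authors' code.

```python
# Dataset statistics as reported in Table 2 of the paper.
DATASET_STATS = {
    "FB15k-237-IMG": {"entities": 14_541, "relations": 237,
                      "train": 272_115, "valid": 17_535, "test": 20_466},
    "WN18-IMG": {"entities": 40_943, "relations": 18,
                 "train": 141_442, "valid": 5_000, "test": 5_000},
}

def total_triples(name: str) -> int:
    """Sum of train/validation/test triples for one dataset."""
    s = DATASET_STATS[name]
    return s["train"] + s["valid"] + s["test"]
```

A reproduction script can compare these counts against the loaded raw files to catch truncated downloads or wrong split files early.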
Hardware Specification Yes The experiments are implemented using the PyTorch (Paszke et al. 2017) framework, and are performed on a single NVIDIA GeForce RTX 2080Ti GPU.
Software Dependencies No The paper mentions "PyTorch (Paszke et al. 2017)" but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup Yes We define the threshold ξ = 0.1. For all MKG datasets, the best-performing hyper-parameters are found by grid search on the validation set. The candidate hyper-parameters are selected from the following ranges: batch size {128, 512, 1024}, number of epochs {500, 1000, 2000}, dropout rate {0.1, 0.2, 0.3}, learning rate {0.001, 0.002, 0.003}, embedding dimensions {100, 200, 300, 400, 500}, attention head numbers P and Q {1, 2, 3, 4}.
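The quoted grid search can be sketched as a Cartesian product over the candidate ranges the paper reports. Only the candidate values are from the paper; how each configuration is trained and scored on the validation set is not specified, so the enumeration below is a minimal sketch with illustrative key names.

```python
# Candidate hyper-parameter ranges as reported in the paper.
from itertools import product

GRID = {
    "batch_size": [128, 512, 1024],
    "epochs": [500, 1000, 2000],
    "dropout": [0.1, 0.2, 0.3],
    "lr": [0.001, 0.002, 0.003],
    "dim": [100, 200, 300, 400, 500],
    "P": [1, 2, 3, 4],  # attention head number P
    "Q": [1, 2, 3, 4],  # attention head number Q
}

def candidate_configs():
    """Yield every hyper-parameter combination in the grid."""
    keys = list(GRID)
    for values in product(*(GRID[k] for k in keys)):
        yield dict(zip(keys, values))

# The full grid contains 3*3*3*3*5*4*4 = 6480 configurations,
# each of which would be trained and scored on the validation set.
```

In practice such a grid is rarely searched exhaustively; the paper does not state whether the authors pruned it, so this enumerates the full space.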