Tackling Uncertain Correspondences for Multi-Modal Entity Alignment

Authors: Liyi Chen, Ying Sun, Shengzhe Zhang, Yuyang Ye, Wei Wu, Hui Xiong

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world datasets validate the effectiveness of TMEA with a clear improvement over competitive baselines.
Researcher Affiliation | Academia | Liyi Chen (1), Ying Sun (2), Shengzhe Zhang (1), Yuyang Ye (3), Wei Wu (1), Hui Xiong (2,4); (1) University of Science and Technology of China; (2) Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou); (3) Rutgers University; (4) Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
Pseudocode | Yes | Algorithm 1: Bi-Directional Iterative Strategy for MMKGs
Open Source Code | Yes | The code is available at https://github.com/liyichen-cly/TMEA.
Open Datasets | Yes | Following previous studies [5, 20], we selected two commonly used public datasets, namely FB15K-DB15K and FB15K-YG15K [23]
Dataset Splits | Yes | allocated 20% of the aligned entity pairs as training data. ... Train model on G_1, G_2, S until the loss on the validation set no longer decreases (a split/early-stopping sketch follows the table)
Hardware Specification | Yes | The experiments were conducted on a server with two Intel Xeon Silver 4214R CPUs @ 2.40GHz, four NVIDIA GeForce RTX 3090 GPUs, and 256 GB of RAM.
Software Dependencies | No | Our model was implemented using the framework of PyTorch. ... We employed a pre-trained Vision Transformer (ViT) [14] ... Then, we use a pre-trained BERT [11] model... (No specific version numbers are provided for the PyTorch, ViT, or BERT models used in the experiments; a feature-extraction sketch follows the table.)
Experiment Setup | Yes | The dimensions of all entity features and r were 100. In the VAEs, the encoder and decoder were both composed of two fully connected layers, and the dimension of the latent representations was set to 64. The number of heads η was 2. In L_TransE, we set γ to 1. In L_cl, γ_cl was 2. In the overall objective, λ_1 and λ_2 were 1e-2. We used mini-batch training with a batch size of 5000. The learning rate was 0.001 and the Adam optimizer [17] was adopted. (A configuration sketch follows the table.)
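
The Dataset Splits row reports a 20% training allocation of the aligned entity pairs and training until the validation loss stops decreasing. The minimal sketch below illustrates one way such a split and stopping criterion could be set up; the function names, the validation share, and the patience value are assumptions for illustration, not the authors' code.

```python
import random

def split_aligned_pairs(aligned_pairs, train_ratio=0.2, valid_ratio=0.1, seed=42):
    """Split aligned entity pairs into train/valid/test sets.

    The 20% training allocation follows the paper; the validation share
    here is an assumption made only for this early-stopping sketch.
    """
    pairs = list(aligned_pairs)
    random.Random(seed).shuffle(pairs)
    n_train = int(len(pairs) * train_ratio)
    n_valid = int(len(pairs) * valid_ratio)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_valid],
            pairs[n_train + n_valid:])

def train_until_no_improvement(run_epoch, eval_valid_loss, patience=5):
    """Train until the validation loss stops decreasing (simple early stopping)."""
    best_loss, bad_epochs = float("inf"), 0
    while bad_epochs < patience:
        run_epoch()                      # one pass over G_1, G_2 and the seed alignments S
        loss = eval_valid_loss()
        if loss < best_loss:
            best_loss, bad_epochs = loss, 0
        else:
            bad_epochs += 1
    return best_loss
```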
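The Software Dependencies row notes that visual features come from a pre-trained ViT and textual features from a pre-trained BERT, with no versions pinned. The sketch below shows one plausible way to extract such features with the Hugging Face transformers library; the specific checkpoints (google/vit-base-patch16-224-in21k, bert-base-uncased) and the use of the [CLS] token are assumptions, since the paper does not name them.

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel, BertTokenizer, BertModel

# Assumed checkpoints; the paper does not specify which pre-trained weights were used.
vit_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k").eval()
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def visual_feature(image_path: str) -> torch.Tensor:
    """Embed an entity image with ViT; the [CLS] token is taken as the visual feature."""
    image = Image.open(image_path).convert("RGB")
    inputs = vit_processor(images=image, return_tensors="pt")
    return vit(**inputs).last_hidden_state[:, 0]   # shape: (1, 768)

@torch.no_grad()
def textual_feature(text: str) -> torch.Tensor:
    """Embed an entity name/description with BERT; the [CLS] token is taken as the textual feature."""
    inputs = bert_tokenizer(text, return_tensors="pt", truncation=True)
    return bert(**inputs).last_hidden_state[:, 0]  # shape: (1, 768)
```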
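The Experiment Setup row fixes most hyperparameters. The sketch below collects them into a configuration dictionary and shows a VAE with two fully connected layers in the encoder and decoder and a 64-dimensional latent space, consistent with that description; the class name, key names, and hidden size are illustrative assumptions, not taken from the released code.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row (key names are illustrative).
CONFIG = {
    "feature_dim": 100,   # dimension of all entity features and r
    "latent_dim": 64,     # VAE latent representation size
    "num_heads": 2,       # number of attention heads eta
    "gamma_transe": 1.0,  # margin gamma in L_TransE
    "gamma_cl": 2.0,      # gamma_cl in L_cl
    "lambda1": 1e-2,
    "lambda2": 1e-2,
    "batch_size": 5000,
    "lr": 1e-3,
}

class TwoLayerVAE(nn.Module):
    """VAE whose encoder and decoder each use two fully connected layers,
    matching the reported architecture (the hidden size is an assumption)."""

    def __init__(self, feature_dim=100, hidden_dim=128, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feature_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, feature_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

model = TwoLayerVAE(CONFIG["feature_dim"], latent_dim=CONFIG["latent_dim"])
optimizer = torch.optim.Adam(model.parameters(), lr=CONFIG["lr"])
```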