MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces

Authors: Linhai Zhang, Deyu Zhou, Yulan He, Zeng Yang

AAAI 2021, pp. 14420-14427 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are conducted on various multimodal event-related tasks, and results show that MERL outperforms a number of unimodal and multimodal baselines, demonstrating the effectiveness of the proposed framework.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China; (2) Department of Computer Science, University of Warwick, UK
Pseudocode | Yes | Algorithm 1: Training MERL
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the proposed MERL framework is openly available.
Open Datasets | Yes | Multimodal hard similarity dataset: Weber et al. (2018) propose an event similarity dataset... Transitive sentence similarity dataset (Kartsaklis et al., 2014)... Multiple-choice narrative cloze dataset: to perform script event prediction, Li, Ding, and Liu (2018) extracted event chains from the New York Gigaword corpus... (see the evaluation sketch after this table).
Dataset Splits | No | For the multiple-choice narrative cloze dataset, the paper states: 'The dataset contains 140k samples for training and 10k samples for testing.', but it does not explicitly mention a validation split or its size; see the split sketch after this table.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper describes the overall model architecture and training procedure, but does not explicitly state specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
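For the multiple-choice narrative cloze task referenced in the Open Datasets row, accuracy is computed by scoring each candidate next event against the context event chain and selecting the highest-scoring candidate. The sketch below illustrates that evaluation loop only; the score function here is a hypothetical placeholder, not MERL's actual similarity measure in its learned embedding spaces.

```python
import random

# Hypothetical stand-in for MERL's event scorer; the paper's actual
# scorer operates in its learned embedding spaces and is not shown here.
def score(context_chain, candidate_event):
    return random.random()  # placeholder score

def mcnc_accuracy(examples):
    """Multiple-choice narrative cloze: for each event chain, pick the
    candidate next event with the highest score and compare it to the
    gold answer."""
    correct = 0
    for context_chain, candidates, gold_index in examples:
        scores = [score(context_chain, c) for c in candidates]
        predicted = max(range(len(candidates)), key=scores.__getitem__)
        correct += int(predicted == gold_index)
    return correct / len(examples)

# Toy usage with two invented examples (five candidates each).
examples = [
    (["enter(restaurant)", "order(food)"],
     ["eat(food)", "fly(plane)", "sing(song)", "buy(car)", "sleep(bed)"], 0),
    (["board(plane)", "take_off(plane)"],
     ["land(plane)", "eat(soup)", "paint(wall)", "read(book)", "swim(pool)"], 0),
]
print(f"MCNC accuracy: {mcnc_accuracy(examples):.2f}")
```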
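Because the paper reports only a 140k/10k train/test division with no validation split (see the Dataset Splits row), anyone reproducing the work must carve out their own held-out set. A minimal sketch follows; the 5% fraction and the seed are arbitrary assumptions, not the authors' choices.

```python
import random

def carve_validation_split(train_examples, val_fraction=0.05, seed=42):
    """Hold out a validation set from the training data. The fraction
    and seed are assumptions made for illustration; the paper does not
    report a validation split."""
    rng = random.Random(seed)
    indices = list(range(len(train_examples)))
    rng.shuffle(indices)
    n_val = int(len(indices) * val_fraction)
    val = [train_examples[i] for i in indices[:n_val]]
    train = [train_examples[i] for i in indices[n_val:]]
    return train, val

# Toy usage: 140k placeholder samples -> 133k train / 7k validation.
samples = list(range(140_000))
train, val = carve_validation_split(samples)
print(len(train), len(val))  # 133000 7000
```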