Image Enhanced Event Detection in News Articles

Authors: Meihan Tong, Shuai Wang, Yixin Cao, Bin Xu, Juanzi Li, Lei Hou, Tat-Seng Chua. Pages 9040-9047.

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct a variety of experiments on our image-enhanced ACE dataset. The overall result strikingly outperforms the current SOTA approaches in ED. The subsequent ablation experiments demonstrate the significance of introducing image modality and the superiority of the proposed DRMM in ED."
Researcher Affiliation | Academia | "Meihan Tong,1 Shuai Wang,1 Yixin Cao,2 Bin Xu,1 Juanzi Li,1 Lei Hou,1 Tat-Seng Chua2; 1Tsinghua University, 2National University of Singapore; {tongmeihan, caoyixin2011, greener2009}@gmail.com, 18813129752@163.com, {xubin, lijuanzi}@tsinghua.edu.cn, dcscts@nus.edu.sg"
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "The code and image dataset are available at https://github.com/shuaiwa16/image-enhanced-event-extraction. ... We will make all our datasets and source code publicly available once the paper is published."
Open Datasets | Yes | "We employ the publicly available dataset in Event Detection ACE2005. ... We manually recover illustrations of news articles in ACE2005 from the original website (https://www.nytimes.com)." ACE2005 is distributed by the LDC: https://catalog.ldc.upenn.edu/LDC2006T06
Dataset Splits | Yes | "The size of train/dev/test for ACE2005 is 529/30/40 (Chen et al. 2015)."
Hardware Specification | No | The paper mentions that "all models can be fit into a single GPU" but does not specify the make or model of the GPU or any other hardware components used for the experiments.
Software Dependencies | No | The paper mentions "tensorflow" but does not provide a specific version number. It also mentions pre-trained models such as "BERT" and "ResNet50" but not the specific versions of the software dependencies required for replication.
Experiment Setup | Yes | "Our batch size is 32, learning rate being 2e-5, and epoch is 4. Our codes are implemented by tensorflow and all models can be fit into a single GPU with the help of TensorFlow Large Model Support."
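The reported setup (batch size 32, learning rate 2e-5, 4 epochs) matches common BERT fine-tuning defaults. A minimal sketch of how a replication might declare these hyperparameters; the `CONFIG` dict and `steps_per_epoch` helper are hypothetical names, not from the paper:

```python
# Hyperparameters as reported in the paper's experiment-setup section.
CONFIG = {
    "batch_size": 32,
    "learning_rate": 2e-5,
    "epochs": 4,
}

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    """Optimizer steps per epoch, counting a final partial batch."""
    return -(-num_examples // batch_size)  # ceiling division

# Example: 100 training examples at batch size 32 -> 4 steps per epoch.
```

The paper does not report the number of training examples per split in sentences, only documents (529/30/40), so the actual step count per epoch cannot be recovered from the text alone.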