Incorporating Schema-Aware Description into Document-Level Event Extraction

Authors: Zijie Xu, Peng Wang, Wenjun Ke, Guozheng Li, Jiajun Liu, Ke Ji, Xiye Chen, Chenxiao Wu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the superiority of SEELE, achieving notable improvements (2.1% to 9.7% F1) on three NDEE datasets and competitive performance on two DEAE datasets.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education; Nanjing University of Finance and Economics; {zijiexu, pwang, kewenjun, gzli, jiajliu, keji}@seu.edu.cn
Pseudocode | No | The paper describes the method verbally and with architecture diagrams but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/TheoryRhapsody/SEELE.
Open Datasets | Yes | NDEE benchmarks: (1) The ChFinAnn dataset [Zheng et al., 2019] is a widely used financial dataset without trigger annotations. (2) The DuEE-Fin dataset [Han et al., 2022] is another classic financial DEE dataset. (3) FNDEE is a recent military news dataset. DEAE benchmarks: (1) RAMS [Ebner et al., 2020] is a typical DEAE news dataset containing news articles from Reddit. (2) WikiEvents [Li et al., 2021] is another commonly used DEAE dataset based on English Wikipedia articles.
Dataset Splits | Yes | We follow the official train/dev/test split. Since DuEE-Fin has not released gold labels for the test set and the online evaluation does not cover trigger extraction, we follow previous work [Liang et al., 2022; Wang et al., 2023] that uses the development set as the test set and splits 500 documents from the training set as the new development set (a sketch of this re-split appears after the table).
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU or CPU models).
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper describes the overall optimization strategy and loss function but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer details) for the experimental setup.
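The DuEE-Fin re-split in the Dataset Splits row can be made concrete with a short script. The following is a minimal sketch under stated assumptions: the JSON-lines format, the file names, and the random seed are all hypothetical and are not taken from the paper or the SEELE repository.

```python
# Sketch of the DuEE-Fin re-split described above: the official dev set becomes
# the test set, and 500 documents held out from the training set form the new
# dev set. File names, JSON-lines format, and the seed are assumptions.
import json
import random


def load_jsonl(path):
    """Read one JSON document per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


def save_jsonl(docs, path):
    """Write one JSON document per line."""
    with open(path, "w", encoding="utf-8") as f:
        for doc in docs:
            f.write(json.dumps(doc, ensure_ascii=False) + "\n")


def resplit_duee_fin(train_path, dev_path, out_dir, n_dev=500, seed=42):
    train_docs = load_jsonl(train_path)
    official_dev = load_jsonl(dev_path)

    # Hold out n_dev documents from the official training set as the new dev set.
    random.Random(seed).shuffle(train_docs)
    new_dev, new_train = train_docs[:n_dev], train_docs[n_dev:]

    save_jsonl(new_train, f"{out_dir}/train.jsonl")
    save_jsonl(new_dev, f"{out_dir}/dev.jsonl")
    # The official dev set (which has gold labels) is used as the test set.
    save_jsonl(official_dev, f"{out_dir}/test.jsonl")


if __name__ == "__main__":
    resplit_duee_fin("duee_fin_train.jsonl", "duee_fin_dev.jsonl", "resplit")
```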