Dependency Structure-Enhanced Graph Attention Networks for Event Detection

Authors: Qizhi Wan, Changxuan Wan, Keli Xiao, Kun Lu, Chenliang Li, Xiping Liu, Dexi Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted to validate the effectiveness of the method, and the results confirm its superiority over the state-of-the-art baselines. The model outperforms the best benchmark, with F1 scores increased by 3.5 and 3.4 percentage points on the ACE2005 English and Chinese corpora, respectively.
Researcher Affiliation | Academia | Qizhi Wan [1,2], Changxuan Wan [1,2]*, Keli Xiao [3], Kun Lu [4], Chenliang Li [5], Xiping Liu [1,2], Dexi Liu [1,2]. [1] School of Information Management, Jiangxi University of Finance and Economics; [2] Jiangxi Key Lab of Data and Knowledge Engineering; [3] College of Business, Stony Brook University; [4] School of Library and Information Studies, The University of Oklahoma; [5] School of Cyber Science and Engineering, Wuhan University.
Pseudocode | No | The paper describes the model architecture and equations but does not provide structured pseudocode or algorithm blocks. (A generic sketch of the core building block is given after this table.)
Open Source Code | No | The paper does not include a statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We conducted experiments on the ACE2005 English and Chinese corpora. ACE2005 is recognized as a benchmark dataset for event detection and has been extensively used in prior research. The corpus is distributed by the LDC: https://catalog.ldc.upenn.edu/LDC2006T06
Dataset Splits | No | The paper states that "the average F1 obtained from validation" is used as the evaluation metric for the experimental results, but it does not specify the exact train/validation/test split percentages or sample counts.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU or CPU models, memory, or cloud instance types, used for running the experiments.
Software Dependencies | No | The paper mentions using BERT and the Stanford CoreNLP tool but does not provide version numbers for these dependencies or for any other libraries. (See the environment sketch after this table.)
Experiment Setup | Yes | The batch size, learning rate, dropout, and number of training iterations are set to 1, 1e-4, 0.2, and 40, respectively. The embedding dimension of the dependency relation type is set to 20. The number of GAT layers is 2 and the number of attention heads is 3. The Bi-LSTM uses a hidden layer size of 100 and 2 layers. (These values are collected into a configuration sketch after this table.)
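
Since the paper provides no pseudocode, the following sketch shows a generic single-head graph attention layer operating over a dependency adjacency matrix, written in PyTorch. It is not the authors' model: it implements the standard GAT attention of Velickovic et al. (2018) and omits the dependency relation type embeddings that the paper's dependency structure-enhanced variant adds; all class and variable names are ours.

    # Generic single-head graph attention layer (Velickovic et al., 2018).
    # Sketch only: the paper's dependency structure-enhanced variant also
    # injects dependency relation type embeddings, which are omitted here.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GATLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
            self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

        def forward(self, h, adj):
            # h: (n, in_dim) token states; adj: (n, n) dependency arcs,
            # assumed to include self-loops so every row has a neighbor.
            z = self.W(h)
            n = z.size(0)
            zi = z.unsqueeze(1).expand(n, n, -1)              # source copies
            zj = z.unsqueeze(0).expand(n, n, -1)              # target copies
            e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
            e = e.masked_fill(adj == 0, float("-inf"))        # attend along arcs only
            alpha = torch.softmax(e, dim=-1)                  # per-node attention
            return F.elu(alpha @ z)                           # aggregated token states

In a dependency-aware setup, adj would be built from parser output, with one arc per head-dependent pair plus self-loops.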
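Because no versions are pinned, a reproduction has to choose its own environment. The sketch below shows one plausible way to load the two named tools from Python; the transformers and stanza packages and the bert-base-uncased checkpoint are our assumptions, not details from the paper (which refers to the Java-based Stanford CoreNLP tool).

    # Hypothetical environment sketch; the paper names only "BERT" and
    # "Stanford CoreNLP" without versions, so every concrete choice below
    # (packages, checkpoint name) is an assumption on our part.
    from transformers import BertTokenizer, BertModel
    import stanza

    # Contextual encoder: checkpoint name is assumed, not stated in the paper.
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    encoder = BertModel.from_pretrained("bert-base-uncased")

    # Dependency parses: stanza is the Stanford NLP group's Python package;
    # the paper itself refers to the Stanford CoreNLP tool.
    stanza.download("en")
    nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
    doc = nlp("The troops were deployed to the region.")
    for word in doc.sentences[0].words:
        print(word.text, word.deprel, word.head)  # token, relation type, head index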
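Finally, the reported hyperparameters can be gathered into a single configuration object for a reproduction attempt. This is a minimal sketch: only the numeric values come from the paper, while the dataclass and all field names are our own.

    # Minimal sketch collecting the hyperparameters reported in the paper.
    # Only the numeric values are grounded in the text; all names are ours.
    from dataclasses import dataclass

    @dataclass
    class ExperimentConfig:
        batch_size: int = 1            # batch size
        learning_rate: float = 1e-4    # learning rate
        dropout: float = 0.2           # dropout probability
        iterations: int = 40           # training iterations
        dep_rel_embed_dim: int = 20    # dependency relation type embedding size
        gat_layers: int = 2            # number of GAT layers
        attention_heads: int = 3       # attention heads per GAT layer
        lstm_hidden_size: int = 100    # Bi-LSTM hidden layer size
        lstm_layers: int = 2           # Bi-LSTM layers

    config = ExperimentConfig()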