Selecting Optimal Context Sentences for Event-Event Relation Extraction
Authors: Hieu Man, Nghia Trung Ngo, Linh Ngo Van, Thien Huu Nguyen (pp. 11058-11066)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed method with state-of-the-art performance on benchmark datasets. |
| Researcher Affiliation | Collaboration | 1. VinAI Research, Vietnam; 2. Hanoi University of Science and Technology, Vietnam; 3. Department of Computer and Information Science, University of Oregon, USA |
| Pseudocode | No | The paper describes its method in prose but does not provide any pseudocode blocks or figures. |
| Open Source Code | No | The paper does not contain any statement about releasing code or links to a repository. |
| Open Datasets | Yes | Datasets: For subevent relation extraction, we evaluate our models on the HiEve dataset (Glavaš et al. 2014) to make it consistent with prior work (Wang et al. 2020; Zhou et al. 2020). For temporal event relation extraction, we employ the popular dataset MATRES (Ning, Wu, and Roth 2018c) for model evaluation as in previous studies (Han et al. 2019b; Wang et al. 2020; Zhao, Lin, and Durrett 2021; Mathur et al. 2021). |
| Dataset Splits | Yes | For HiEve, we employ the split with 80 documents for training (with 35,001 event pairs) and 20 documents for testing (with 7,093 event pairs) as in (Wang et al. 2020). For MATRES, we apply the standard split as in prior work (Han, Ning, and Peng 2019a; Ning, Subramanian, and Roth 2019; Wang et al. 2020), featuring 183/20 documents with 6332/827 event pairs for the training/test portions (respectively). MATRES also reserves 72 documents for development purposes (Han, Ning, and Peng 2019a; Wang et al. 2020). Finally, inherited from (Naik, Breitfeller, and Rose 2019; Mathur et al. 2021), our data splits involve 4000/650/1500 and 32609/1435/4258 event pairs in the training/development/test data for the TDDMan and TDDAuto datasets (respectively). We fine-tune the hyper-parameters in our model using the development set of the MATRES dataset. |
| Hardware Specification | No | The paper does not mention specific GPU/CPU models or other hardware details used for experiments. |
| Software Dependencies | No | The paper mentions 'transformer-based language models (e.g., BERT)' and 'RoBERTa model (Liu et al. 2019)' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper mentions fine-tuning hyperparameters but does not provide their specific numerical values (e.g., learning rate, batch size, epochs) in the main text. |