Graph Convolutional Networks With Argument-Aware Pooling for Event Detection
Authors: Thien Nguyen, Ralph Grishman
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experiments demonstrate the benefits of the dependency-based convolutional neural networks and the entity mention-based pooling method for event detection. We achieve the state-of-the-art performance on widely used datasets with both perfect and predicted entity mentions. |
| Researcher Affiliation | Academia | Thien Huu Nguyen Department of Computer and Information Science University of Oregon Eugene, Oregon 97403, USA thien@cs.uoregon.edu Ralph Grishman Computer Science Department New York University New York, NY 10003 USA grishman@cs.nyu.edu |
| Pseudocode | No | The paper describes the model in prose and uses equations, but does not provide pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate the networks in this paper using the widely used datasets for ED, i.e., the ACE 2005 dataset and the TAC KBP 2015 dataset. We employ the ACE 2005 dataset in the setting with golden (perfect) annotation for entity mentions as does the prior work (Nguyen and Grishman 2015; 2016b; Liu et al. 2017). |
| Dataset Splits | Yes | In order to ensure a compatible comparison with the previous work on this dataset (Nguyen and Grishman 2015; Chen et al. 2015; Nguyen and Grishman 2016b; Chen et al. 2017; Liu et al. 2017), we use the same data split with 40 newswire articles for the test set, 30 other documents for the development set and 529 remaining documents for the training set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | In order to parse the sentences in the datasets, we employ the Stanford Syntactic Parser with the universal dependency relations. The paper names this tool but does not provide a version number for it or for any other software dependency. |
| Experiment Setup | Yes | The parameters are tuned on the development data of the ACE 2005 dataset. The selected values for the parameters include the mini-batch size = 50, the pre-defined threshold for the l2 norms = 3, the dropout rate = 0.5, the dimensionality of the position embeddings and the entity type embeddings = 50 and the number of hidden units for the convolution layers d = 300. |
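As a rough sketch, the hyperparameters reported in the experiment-setup row could be collected into a single configuration object for a reimplementation. The key names below are hypothetical (the paper does not specify them); only the values come from the paper's description.

```python
# Hypothetical configuration reconstructing the reported experiment setup.
# Key names are illustrative; values are those stated in the paper.
config = {
    "batch_size": 50,            # mini-batch size
    "l2_norm_threshold": 3.0,    # pre-defined threshold for the l2 norms
    "dropout_rate": 0.5,
    "position_embedding_dim": 50,     # dimensionality of position embeddings
    "entity_type_embedding_dim": 50,  # dimensionality of entity type embeddings
    "conv_hidden_units": 300,         # d: hidden units per convolution layer
}

def validate(cfg):
    """Basic sanity checks on the configuration values."""
    assert 0.0 < cfg["dropout_rate"] < 1.0, "dropout must be a fraction"
    assert cfg["batch_size"] > 0, "batch size must be positive"
    assert cfg["conv_hidden_units"] > 0, "hidden size must be positive"
    return cfg

validate(config)
```

Tuning these values on the ACE 2005 development set, as the paper reports, would simply mean sweeping this dictionary and keeping the configuration with the best development-set F1.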