A Neural Model for Joint Event Detection and Summarization
Authors: Zhongqing Wang, Yue Zhang
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed neural joint model is more effective compared to its pipeline baseline. |
| Researcher Affiliation | Academia | Zhongqing Wang and Yue Zhang; Soochow University, Suzhou, China; Singapore University of Technology and Design, Singapore |
| Pseudocode | No | The paper describes algorithmic steps and models using mathematical formulations and descriptive text (e.g., in Section 3.2), but it does not include explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release the code and data sets of this paper at https://github.com/wangzq870305/joint_event_detection. |
| Open Datasets | Yes | We collect and annotate two datasets for evaluating the performance of our proposed system, one from the earthquake domain, and another from the DDoS attack domain. ... We release the code and data sets of this paper at https://github.com/wangzq870305/joint_event_detection. |
| Dataset Splits | Yes | We randomly choose 10 events as training data for the earthquake domain, 80 events as training data for the DDoS domain, and the remaining events as testing data. |
| Hardware Specification | No | The paper describes the software components and training parameters, but does not specify any particular hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using Adagrad for optimization, dropout, and training word embeddings using the Skip-gram algorithm, but it does not specify version numbers for these or other software libraries or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Our training objective is to minimize the cross-entropy loss between the gold labels and predicted labels on those three tasks. We apply online training, adjusting model parameters using Adagrad [Duchi et al., 2011]. In order to avoid overfitting, dropout is used on word embeddings with a ratio of 0.2 [Hinton et al., 2012]. The sizes of the hidden layers H_d, H_c, and H_s are equally set to 32. We train word embeddings using the Skip-gram algorithm, and fine-tune them during training. The size of word embeddings is 128. (A hedged sketch of this setup follows the table.) |
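
The paper does not name a framework or spell out the full joint architecture, so the following is a minimal sketch only: PyTorch is an assumption, the model body is a generic placeholder for one of the three tasks rather than the authors' joint model, and `VOCAB_SIZE` / `NUM_LABELS` are hypothetical. Only the hyperparameters quoted in the Experiment Setup row (128-dimensional embeddings, hidden size 32, dropout 0.2 on embeddings, Adagrad, cross-entropy loss, online updates) come from the paper.

```python
# Hedged sketch of the reported training configuration.
# PyTorch and the placeholder classifier are assumptions; only the quoted
# hyperparameters (embedding size 128, hidden size 32, dropout 0.2, Adagrad,
# cross-entropy, online training) are taken from the paper.
import torch
import torch.nn as nn

EMBED_SIZE = 128    # word embedding dimension (Skip-gram pre-trained, fine-tuned)
HIDDEN_SIZE = 32    # shared size reported for the H_d, H_c, H_s hidden layers
DROPOUT = 0.2       # dropout applied on word embeddings
VOCAB_SIZE = 10000  # hypothetical vocabulary size
NUM_LABELS = 2      # hypothetical label set for one of the three tasks

class SketchModel(nn.Module):
    """Placeholder single-task classifier standing in for the joint model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_SIZE)  # fine-tuned during training
        self.drop = nn.Dropout(DROPOUT)                    # dropout on embeddings
        self.lstm = nn.LSTM(EMBED_SIZE, HIDDEN_SIZE, batch_first=True)
        self.out = nn.Linear(HIDDEN_SIZE, NUM_LABELS)

    def forward(self, token_ids):
        x = self.drop(self.embed(token_ids))
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])

model = SketchModel()
optimizer = torch.optim.Adagrad(model.parameters())  # Adagrad, as stated in the paper
loss_fn = nn.CrossEntropyLoss()                      # cross-entropy training objective

def train_example(token_ids, gold_label):
    """Online training: update parameters one example at a time."""
    model.train()
    optimizer.zero_grad()
    logits = model(token_ids.unsqueeze(0))            # add a batch dimension of 1
    loss = loss_fn(logits, gold_label.unsqueeze(0))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The per-example `train_example` loop mirrors the "online training" wording in the quote; in practice the released repository should be consulted for the actual joint architecture and framework.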