What Happens Next? Future Subevent Prediction Using Contextual Hierarchical LSTM
Authors: Linmei Hu, Juanzi Li, Liqiang Nie, Xiao-Li Li, Chao Shao
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a real-world dataset demonstrate the superiority of our model over several state-of-the-art methods. |
| Researcher Affiliation | Academia | Department of Computer Science and Technology, Tsinghua University, China; School of Computer Science and Technology, Shandong University, China; Institute for Infocomm Research, A*STAR, Singapore |
| Pseudocode | No | The paper describes the model architecture and steps but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide access to source code for the described methodology. |
| Open Datasets | Yes | We therefore crawled a large-scale Chinese news event dataset containing 15,254 news series from Sina News (http://news.sina.com.cn/zt/). |
| Dataset Splits | Yes | After preprocessing, we randomly split all the events into three parts: 80% for training, 10% for validation and the remaining 10% for test. |
| Hardware Specification | No | The paper does not specify the hardware (GPU/CPU models, processor speeds, or memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions ICTCLAS and the SRILM tool (Stolcke and others 2002) but does not provide version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The optimal parameter values are given as follows. 1) LSTM parameters and word embedding were initialized from a uniform distribution between [-0.08, 0.08]; 2) Learning rate = 0.1; 3) Batch size = 32; 4) Dropout rate = 0.2; 5) The dimension of word embeddings and topic embeddings = 100, and the dimension of hidden vector D = 400; 6) The number of hidden layers of the LSTM networks = 2; 7) The topic number = 1,000. |
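The hyperparameters in the experiment-setup row can be collected into a minimal configuration sketch. This is plain Python with names of my own choosing; the paper publishes no code, so the structure below (a config dict plus a uniform-initialization helper matching the stated [-0.08, 0.08] range) is illustrative only:

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "init_range": 0.08,     # LSTM params and word embeddings ~ U[-0.08, 0.08]
    "learning_rate": 0.1,
    "batch_size": 32,
    "dropout": 0.2,
    "embedding_dim": 100,   # word and topic embedding dimension
    "hidden_dim": 400,      # hidden vector dimension D
    "num_layers": 2,        # hidden layers of the LSTM networks
    "num_topics": 1000,
}

def init_embedding(vocab_size, dim, init_range):
    """Initialize an embedding table from a uniform distribution
    over [-init_range, init_range], as the paper describes."""
    return [[random.uniform(-init_range, init_range) for _ in range(dim)]
            for _ in range(vocab_size)]

# Example: a tiny 5-word embedding table under these settings.
emb = init_embedding(vocab_size=5,
                     dim=CONFIG["embedding_dim"],
                     init_range=CONFIG["init_range"])
```

Note that `init_embedding` returns plain nested lists to stay dependency-free; an actual reimplementation would use a tensor library and feed these values into its LSTM and embedding layers.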