Story Ending Generation with Incremental Encoding and Commonsense Knowledge
Authors: Jian Guan, Yansen Wang, Minlie Huang
AAAI 2019, pp. 6473–6480
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Automatic and manual evaluation shows that our model can generate more reasonable story endings than state-of-the-art baselines. We conducted the automatic evaluation on the 8,162 stories (the entire test set). The results of the automatic evaluation are shown in Table 1. |
| Researcher Affiliation | Academia | Dept. of Computer Science & Technology, Tsinghua University, Beijing 100084, China; Institute for Artificial Intelligence, Tsinghua University (THUAI), China; Beijing National Research Center for Information Science and Technology, China. guanj15@mails.tsinghua.edu.cn; ys-wang15@mails.tsinghua.edu.cn; aihuang@tsinghua.edu.cn |
| Pseudocode | No | The paper describes the model architecture and equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes and data are available at https://github.com/JianGuanTHU/StoryEndGen. |
| Open Datasets | Yes | We evaluated our model on the ROCStories corpus (Mostafazadeh et al. 2016a). The corpus contains 98,162 five-sentence stories for evaluating story understanding and script learning. We randomly selected 90,000 stories for training and the remaining 8,162 for evaluation. |
| Dataset Splits | No | The paper states, 'We randomly selected 90,000 stories for training and the left 8,162 for evaluation.' It does not explicitly mention a separate validation split or its size. (A hypothetical split sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'GloVe.6B (Pennington, Socher, and Manning 2014) is used as word vectors' but does not provide specific version numbers for any software dependencies like programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | The parameters are set as follows: GloVe.6B (Pennington, Socher, and Manning 2014) is used as word vectors, and the vocabulary size is set to 10,000 and the word vector dimension to 200. We applied 2-layer LSTM units with 512-dimension hidden states. These settings were applied to all the baselines. The parameters of the LSTMs (Eq. 5 and 6) are shared by the encoder and the decoder. (A hedged configuration sketch appears below the table.) |
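For concreteness, here is a minimal sketch of the train/test split as the paper reports it: 90,000 training stories and 8,162 evaluation stories drawn from the 98,162-story ROCStories corpus. The input file name, the one-story-per-line layout, and the random seed are assumptions; the paper specifies none of them and reports no validation split.

```python
import random

# Assumed input: one five-sentence story per line (file name is hypothetical).
with open("rocstories.txt", encoding="utf-8") as f:
    stories = [line.strip() for line in f if line.strip()]

assert len(stories) == 98162  # corpus size reported in the paper

random.seed(42)  # seed is an assumption; the paper only says "randomly selected"
random.shuffle(stories)

# 90,000 train / 8,162 test, matching the paper's reported split.
train, test = stories[:90000], stories[90000:]
print(len(train), len(test))  # 90000 8162
```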
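Likewise, a hedged sketch of the reported hyperparameters: a 10,000-word vocabulary, 200-dimension GloVe.6B embeddings, and 2-layer, 512-dimension LSTMs whose parameters are shared between the encoder and the decoder. The framework (PyTorch) and the weight-sharing mechanism shown here are assumptions, and this is not the paper's full incremental-encoding model with commonsense knowledge, only its stated settings.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000   # vocabulary size from the paper
EMBED_DIM = 200       # GloVe.6B word-vector dimension
HIDDEN_DIM = 512      # LSTM hidden-state dimension
NUM_LAYERS = 2        # "2-layer LSTM units"

class SharedSeq2Seq(nn.Module):
    """Encoder-decoder where a single LSTM's parameters serve both roles,
    mirroring 'the parameters of the LSTMs (Eq. 5 and 6) are shared'."""

    def __init__(self):
        super().__init__()
        # In the paper this embedding would be initialized from GloVe.6B.
        self.embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM,
                            num_layers=NUM_LAYERS, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, src_ids, tgt_ids):
        _, state = self.lstm(self.embedding(src_ids))           # encode context
        dec_out, _ = self.lstm(self.embedding(tgt_ids), state)  # decode ending
        return self.out(dec_out)                                # per-token logits

model = SharedSeq2Seq()
logits = model(torch.randint(0, VOCAB_SIZE, (4, 40)),   # 4 context sequences
               torch.randint(0, VOCAB_SIZE, (4, 15)))   # 4 ending sequences
print(logits.shape)  # torch.Size([4, 15, 10000])
```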