A Character-Centric Neural Model for Automated Story Generation
Authors: Danyang Liu, Juntao Li, Meng-Hsuan Yu, Ziming Huang, Gongshen Liu, Dongyan Zhao, Rui Yan (pp. 1725-1732)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on an open dataset indicate that our model yields meaningful improvements over several strong baselines on both human and automatic evaluations. |
| Researcher Affiliation | Collaboration | Danyang Liu (1,2), Juntao Li (1,3), Meng-Hsuan Yu (1,3), Ziming Huang (4), Gongshen Liu (2), Dongyan Zhao (1,3), Rui Yan (1,2,3). (1) Wangxuan Institute of Computer Technology, Peking University, Beijing, China; (2) Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai Jiao Tong University, Shanghai, China; (3) Center for Data Science, AAIS, Peking University, Beijing, China; (4) IBM Research-China, Beijing, China |
| Pseudocode | No | The paper provides architectural diagrams (Figure 1 and Figure 2) but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code will be available at https://github.com/liudany/character-centric. |
| Open Datasets | Yes | We conduct experiments on a corpus of movie plot summaries extracted from Wikipedia (Bamman, O'Connor, and Smith 2013). |
| Dataset Splits | Yes | We randomly split the corpus into 34,306/4,000/4,000 stories for training, validating and testing respectively. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions 'Mosesdecoder tools', 'Stanford Core NLP library', and 'Adam optimization algorithm' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | For the sentence generator, both encoder and decoder are composed of 1 layer with 512-dimensional hidden states. The balancing hyper-parameters α and β are set to 1 and 0.8 respectively. The character embedding is set to 512, which is the same as the word embedding size. Word embeddings are randomly initialized and shared across the model. We use the Adam optimization algorithm (Kingma and Ba 2014) with learning rate α = 0.001. (See the configuration sketch after this table.) |
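
The split reported in the Dataset Splits row can be reproduced with a plain random partition of the corpus. The sketch below is a minimal illustration, assuming the corpus is already loaded as a list of plot summaries; the function name `split_corpus` and the fixed seed are illustrative assumptions, not taken from the paper.

```python
import random

def split_corpus(stories, n_valid=4000, n_test=4000, seed=42):
    """Randomly split a list of plot summaries into train/valid/test sets.

    The paper reports a 34,306/4,000/4,000 split; the seed and this helper
    are illustrative, not from the authors' code.
    """
    stories = list(stories)
    random.Random(seed).shuffle(stories)
    test = stories[:n_test]
    valid = stories[n_test:n_test + n_valid]
    train = stories[n_test + n_valid:]  # remaining 34,306 stories
    return train, valid, test
```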
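
The hyperparameters quoted in the Experiment Setup row translate into a compact model configuration. The sketch below is a hypothetical stand-in, not the authors' released code: it assumes a PyTorch encoder-decoder with GRU cells (the excerpt does not name the recurrent cell), omits the forward pass, and uses placeholder values for `vocab_size` and `num_characters`.

```python
import torch
from torch import nn

# Hyperparameter values quoted from the paper's experiment setup.
HIDDEN_SIZE = 512       # 1-layer encoder/decoder hidden states
EMB_SIZE = 512          # character embedding matches the word embedding size
ALPHA, BETA = 1.0, 0.8  # balancing hyper-parameters
LEARNING_RATE = 1e-3    # Adam learning rate

class SentenceGenerator(nn.Module):
    """Hypothetical stand-in for the paper's sentence generator."""
    def __init__(self, vocab_size, num_characters):
        super().__init__()
        # Word embeddings are randomly initialized and shared across the model.
        self.word_emb = nn.Embedding(vocab_size, EMB_SIZE)
        self.char_emb = nn.Embedding(num_characters, EMB_SIZE)
        self.encoder = nn.GRU(EMB_SIZE, HIDDEN_SIZE, num_layers=1, batch_first=True)
        self.decoder = nn.GRU(EMB_SIZE, HIDDEN_SIZE, num_layers=1, batch_first=True)
        self.out = nn.Linear(HIDDEN_SIZE, vocab_size)

# Placeholder sizes; the paper does not report vocabulary or character counts here.
model = SentenceGenerator(vocab_size=50000, num_characters=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```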