Plan-and-Write: Towards Better Automatic Storytelling

Authors: Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, Rui Yan (pp. 7378-7385)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that with explicit storyline planning, the generated stories are more diverse, coherent, and on topic than those generated without creating a full plan, according to both automatic and human evaluations.
Researcher Affiliation | Collaboration | Lili Yao (1,3), Nanyun Peng (2), Ralph Weischedel (2), Kevin Knight (2), Dongyan Zhao (1), Rui Yan (1); liliyao@tencent.com, {npeng,weisched,knight}@isi.edu, {zhaodongyan,ruiyan}@pku.edu.cn. Affiliations: 1 Institute of Computer Science and Technology, Peking University; 2 Information Sciences Institute, University of Southern California; 3 Tencent AI Lab.
Pseudocode | No | The paper describes mathematical formulations and model architectures but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | Code and appendix will be available at https://bitbucket.org/VioletPeng/language-model
Open Datasets | Yes | We conduct the experiments on the ROCStories corpus (Mostafazadeh et al. 2016a).
Dataset Splits | Yes | We split the original training data into 8:1:1 for training, validation, and testing.
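The 8:1:1 split quoted above can be sketched as follows. This is a hypothetical illustration: the paper's quote does not specify shuffling, seeding, or the exact partitioning code, so `split_811` and its seed are assumptions.

```python
import random

def split_811(examples, seed=42):
    """Partition a dataset into train/validation/test at an 8:1:1 ratio.

    Shuffling and the fixed seed are illustrative assumptions, not
    details stated in the paper.
    """
    data = list(examples)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

train, val, test = split_811(range(100))
# With 100 examples this yields 80 / 10 / 10 items.
```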
Hardware Specification | No | The paper does not specify any hardware details such as specific GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions neural generation models and SGD but does not provide specific version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | We train all the models using stochastic gradient descent (SGD). For the encoder and decoder in our generation models, we tune the hyper-parameters of the embedding and hidden vector dimensions and the dropout rate by grid search. We randomly initialize the word embeddings and tune the dimensions in the range of [100, 200, 300, 500] for storyline generation and [300, 500, 1000] for story generation. We tune the hidden vector dimensions in the range of [300, 500, 1000]. The embedding and hidden vector dropout rates are all tuned from 0 to 0.5, step by 0.1. We tune all baselines and proposed models based on BLEU scores (Papineni et al. 2002) on the validation set. Details of the best hyper-parameter values for each setting are given in Appendix.
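The hyper-parameter grid quoted above can be enumerated as a minimal sketch. The search ranges come from the quote; the `grid` helper and the training/scoring step it would feed are assumptions (the paper does not publish its tuning script).

```python
import itertools

# Search ranges as stated in the paper's experiment setup.
emb_dims_storyline = [100, 200, 300, 500]        # storyline generation
emb_dims_story = [300, 500, 1000]                # story generation
hidden_dims = [300, 500, 1000]
dropouts = [round(0.1 * i, 1) for i in range(6)]  # 0.0 to 0.5, step 0.1

def grid(emb_dims):
    """Enumerate every (embedding dim, hidden dim, dropout) setting.

    In the actual experiments, each setting would be trained with SGD
    and ranked by validation BLEU; that step is omitted here.
    """
    return list(itertools.product(emb_dims, hidden_dims, dropouts))

story_grid = grid(emb_dims_story)
# 3 embedding sizes x 3 hidden sizes x 6 dropout rates = 54 settings
```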