Controllable Neural Story Plot Generation via Reward Shaping

Authors: Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, Mark O. Riedl

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Automated evaluations show our technique can create a model that generates story plots which consistently achieve a specified goal. Human-subject studies show that the generated stories have more plausible event ordering than baseline plot generation techniques."
Researcher Affiliation | Academia | "(1) School of Interactive Computing, Georgia Institute of Technology; (2) Department of Computer Science, University of Kentucky. {ptambwekar3, murtaza.d.210, ljmartin, animesh.mehta}@gatech.edu, harrison@cs.uky.edu, riedl@cc.gatech.edu"
Pseudocode | No | The paper does not include a pseudocode block or a clearly labeled algorithm.
Open Source Code | No | The paper does not provide a statement about, or a link to, open-source code for the described methodology.
Open Datasets | Yes | "We use the CMU movie summary corpus [Bamman et al., 2013]. ... The romance corpus was split into 90% training, and 10% testing data."
Dataset Splits | No | The paper states "The romance corpus was split into 90% training, and 10% testing data." but does not specify a separate validation split.
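The 90%/10% split quoted above can be sketched in plain Python. Only the split ratio comes from the paper; the corpus stand-in, function name, and random seed below are hypothetical:

```python
import random

def train_test_split(items, train_frac=0.9, seed=0):
    """Shuffle and split a list into train/test portions.

    train_frac=0.9 mirrors the 90% training / 10% testing split the
    paper reports for the romance corpus. Note that no validation
    split is produced, matching the "Dataset Splits: No" finding.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical stand-in for the romance-genre plot summaries.
summaries = [f"plot_{i}" for i in range(1000)]
train, test = train_test_split(summaries)
print(len(train), len(test))  # 900 100
```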
Hardware Specification | No | The paper does not describe the hardware (e.g., GPU/CPU models) used for its experiments.
Software Dependencies | No | The paper mentions using TensorFlow but does not specify its version number or any other software dependencies with their versions.
Experiment Setup | Yes | "Both the encoder and the decoder comprised of LSTM units, with a hidden layer size of 1024. The network was pre-trained for a total of 200 epochs using minibatch gradient descent and batch size of 64."
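The quoted setup (hidden size 1024, 200 epochs of minibatch gradient descent, batch size 64) implies a standard pre-training loop. A framework-free sketch of the batching logic is below; the model, the gradient step, and the toy dataset are placeholders, not the authors' implementation:

```python
def minibatches(data, batch_size=64):
    """Yield consecutive minibatches; batch_size=64 matches the paper."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# Hyperparameters taken from the quoted experiment setup.
HIDDEN_SIZE = 1024   # LSTM hidden layer size for both encoder and decoder
EPOCHS = 200         # pre-training epochs
BATCH_SIZE = 64

def pretrain(dataset, epochs=EPOCHS, batch_size=BATCH_SIZE):
    """Skeleton of the pre-training loop; returns the number of
    gradient steps taken. The actual seq2seq forward pass and
    parameter update are elided."""
    steps = 0
    for _ in range(epochs):
        for batch in minibatches(dataset, batch_size):
            # A real implementation would encode the batch, decode,
            # compute the loss, and apply a gradient update here.
            steps += 1
    return steps

# 640 examples -> 10 batches per epoch x 200 epochs = 2000 steps.
print(pretrain(list(range(640))))  # 2000
```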