Writing Stories with Help from Recurrent Neural Networks
Author: Melissa Roemmele
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "RNNs are trained on sequences of text to model the conditional probability distribution... After training it is straightforward to generate new text... By February 2016, I plan to have completed training the initial RNN model and begun experiments using the writing assistant to evaluate the model." (A sketch of this generate-by-sampling process follows the table.) |
| Researcher Affiliation | Academia | Melissa Roemmele Institute for Creative Technologies University of Southern California 12015 Waterfront Dr., Los Angeles, CA 90094 roemmele@ict.usc.edu |
| Pseudocode | No | The paper provides mathematical equations for the RNN recurrence and output calculation but does not include any structured pseudocode or algorithm blocks. (A standard formulation of these equations is sketched after the table.) |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, nor does it state that code will be made publicly available. |
| Open Datasets | No | The paper states: "At the current time (September 2015), two requirements of this thesis have already been fulfilled: a dataset of 20 million stories has been prepared..." It mentions that a dataset has been prepared but provides no access information (e.g., a link, a citation to a public source, or a repository). |
| Dataset Splits | No | The paper mentions a 'dataset of 20 million stories' has been prepared and that an 'initial RNN model' will be trained. However, it does not specify any training, validation, or test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper discusses the theoretical and practical aspects of RNNs for story generation but does not specify any hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper describes the use of Recurrent Neural Networks (RNNs) and their components (e.g., softmax classifier) but does not provide specific software names with version numbers (e.g., Python, TensorFlow, PyTorch versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the general architecture and training process of RNNs (e.g., 'minimizing a cost function,' 'back-propagated errors') but does not provide specific experimental setup details such as hyperparameters (learning rate, batch size, number of epochs) or other training configurations. |
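For reference, the "mathematical equations for the RNN recurrence and output calculation" noted in the Pseudocode row follow the standard vanilla-RNN formulation below. This is a generic reconstruction consistent with the paper's description (a recurrence over hidden states feeding a softmax classifier), not a verbatim copy of the paper's notation.

```latex
h_t = \tanh\left(W_{xh} x_t + W_{hh} h_{t-1} + b_h\right)
\qquad
y_t = \operatorname{softmax}\left(W_{hy} h_t + b_y\right)
```

Here $x_t$ is the input token at step $t$, $h_t$ is the hidden state, and $y_t$ is the predicted distribution over the next token; the weight matrices and bias vectors are the trained parameters.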
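To make the Research Type evidence concrete, the sketch below shows how such a model generates text once trained: the recurrence updates a hidden state, the softmax output gives a conditional distribution over the next token, and repeatedly sampling from that distribution (feeding each sample back in) produces new text. All names and dimensions here are hypothetical; the paper reports no code, hyperparameters, or vocabulary details, so this is a minimal illustrative implementation, not the author's.

```python
import numpy as np

# Hypothetical sizes; the paper does not report its vocabulary or layer dimensions.
vocab_size, hidden_size = 64, 128
rng = np.random.default_rng(0)

# Parameters of a vanilla RNN language model (randomly initialized, untrained here).
W_xh = rng.normal(0, 0.01, (hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(0, 0.01, (hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(0, 0.01, (vocab_size, hidden_size))   # hidden -> output
b_h = np.zeros(hidden_size)
b_y = np.zeros(vocab_size)

def step(x_onehot, h_prev):
    """One recurrence step: update the hidden state and return the
    softmax distribution over the next token."""
    h = np.tanh(W_xh @ x_onehot + W_hh @ h_prev + b_h)
    logits = W_hy @ h + b_y
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum(), h

def sample(seed_ix, n_tokens):
    """Generate a token sequence by repeatedly sampling the next token
    from the model's conditional distribution and feeding it back in."""
    h = np.zeros(hidden_size)
    x = np.zeros(vocab_size)
    x[seed_ix] = 1.0
    out = [seed_ix]
    for _ in range(n_tokens):
        probs, h = step(x, h)
        ix = int(rng.choice(vocab_size, p=probs))
        out.append(ix)
        x = np.zeros(vocab_size)
        x[ix] = 1.0
    return out

print(sample(seed_ix=0, n_tokens=20))
```

In an actual training run, the cross-entropy between each predicted distribution and the observed next token would be minimized by backpropagation through time; the associated configuration (learning rate, batch size, number of epochs) is exactly what the Experiment Setup row notes as missing from the paper.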