Violence Rating Prediction from Movie Scripts

Authors: Victor R. Martinez, Krishna Somandepalli, Karan Singla, Anil Ramakrishna, Yalda T. Uhls, Shrikanth Narayanan

AAAI 2019, pp. 671-678

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested our models on a dataset of 732 Hollywood scripts annotated by experts for violent content. Our performance evaluation suggests that linguistic features are a good indicator for violent content. Furthermore, our ablation studies show that semantic and sentiment features are the most important predictors of violence in this data.
Researcher Affiliation | Academia | Victor R. Martinez, University of Southern California, Los Angeles, CA, victorrm@usc.edu; Krishna Somandepalli, University of Southern California, Los Angeles, CA, somandep@usc.edu; Karan Singla, University of Southern California, Los Angeles, CA, singlak@usc.edu; Anil Ramakrishna, University of Southern California, Los Angeles, CA, akramakr@usc.edu; Yalda T. Uhls, University of California Los Angeles, Los Angeles, CA, yaldatuhls@gmail.com; Shrikanth Narayanan, University of Southern California, Los Angeles, CA, shri@sipi.usc.edu
Pseudocode | No | The paper includes a figure illustrating the neural network architecture (Figure 1), but it does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | The code to replicate all experiments is publicly available (footnote 4: https://github.com/usc-sail/mica-violence-ratings-predictions-from-movie-scripts).
Open Datasets | Yes | We use the movie screenplays collected by Ramakrishna et al. (2017), an extension of Movie-DiC (Banchs 2012).
Dataset Splits | Yes | We estimated the model's performance and the optimal penalty parameter C ∈ [0.01, 1, 10, 100, 1000] through nested 5-fold cross-validation (CV). Albeit uncommon in most deep-learning approaches, we opted for 5-fold CV to estimate our model's performance.
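The nested cross-validation procedure quoted above can be sketched with scikit-learn: an inner grid search selects the penalty C while an outer loop estimates generalization performance. This is a minimal illustration with synthetic data, not the paper's script features; the grid of C values is the one quoted in the response.

```python
# Hedged sketch of nested 5-fold CV over the SVC penalty C.
# X and y are random stand-ins for the paper's linguistic features.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# Inner loop: pick C by 5-fold grid search.
inner = GridSearchCV(
    LinearSVC(max_iter=10_000),
    param_grid={"C": [0.01, 1, 10, 100, 1000]},
    cv=5,
)

# Outer loop: 5-fold estimate of performance with C tuned per fold.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```

Keeping the model-selection loop strictly inside each outer training fold is what prevents the reported score from being optimistically biased by hyperparameter tuning.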
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | Linear SVC was implemented using scikit-learn (Pedregosa et al. 2011). RNN models were implemented in Keras (Chollet and others 2015). While specific libraries are mentioned, no version numbers for these software dependencies are provided.
Experiment Setup | Yes | We used the Adam optimizer with a mini-batch size of 16 and a learning rate of 0.001. To prevent over-fitting, we use drop-out of 0.5 and train until convergence (i.e., consecutive losses differing by less than 10^-8). Both models were trained with the number of hidden units H ∈ [4, 8, 16, 32].
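The convergence criterion quoted above (stop when consecutive losses differ by less than 10^-8) can be sketched in plain Python. The `train_step` callable here is a hypothetical stand-in for one epoch of optimization, not the paper's actual training loop.

```python
# Hedged sketch of the quoted convergence rule: stop training once
# consecutive epoch losses differ by less than tol (10^-8 in the paper).
def train_until_convergence(train_step, tol=1e-8, max_epochs=10_000):
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        loss = train_step()  # runs one epoch, returns its loss
        if abs(prev_loss - loss) < tol:
            return epoch, loss  # converged: losses effectively stopped moving
        prev_loss = loss
    return max_epochs, prev_loss  # hit the epoch budget without converging

# Toy loss sequence that decays geometrically toward zero.
losses = iter(0.5 ** k for k in range(100))
epoch, final_loss = train_until_convergence(lambda: next(losses))
```

With the toy sequence, consecutive losses first differ by less than 10^-8 once the loss itself drops below that scale, so the loop halts after a few dozen epochs rather than exhausting the budget.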