Estimating Temporal Dynamics of Human Emotions

Authors: Seungyeon Kim, Joonseok Lee, Guy Lebanon, Haesun Park

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper explores a temporal model for human emotions that applies to a sequential stream of text documents written by the same author across different time points. Our model is somewhat similar to Brownian motion and the Kalman filter and generalizes the latent space emotion model in Kim et al. (2013). Most papers on sentiment analysis use movie or item reviews, which are poorly suited to temporal modeling. Reviews relate to a specific stationary truth and are unlikely to change significantly based on the time of authoring. Blog posts, however, are free expressions of the author's emotions and thus depend more on the time of authoring. Therefore, we demonstrate a temporal model on data consisting of timestamped blog posts. We construct the sentiment or emotion ground truth from an emotion label for the text. The temporal model is useful in two ways. First, it leads to a predictive model that estimates the current emotional state of the author within a specific time context. This predictive model is more accurate than static analysis, which ignores time information. Second, the model is useful in confirming or refuting psychological models concerning human emotions and their dependency on time. Specifically, we reexamine the circadian rhythm model from psychology and investigate its higher-order generalizations and its variance across multiple individuals. (A minimal illustrative sketch of this kind of temporal model follows the table.)
Researcher Affiliation | Collaboration | College of Computing, Georgia Institute of Technology, Atlanta, GA, USA; Amazon, Seattle, WA, USA. {seungyeon.kim, jlee716}@gatech.edu, glebanon@gmail.com, hpark@cc.gatech.edu
Pseudocode | No | The paper describes a graphical model in Figure 1 and mathematical formulations for learning and inference, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides links to implementations of *third-party* methods used for comparison (e.g., SLDA, LibSVM, CRF) in its footnotes. However, it does not provide concrete access to the source code for the temporal sentiment method proposed in this paper.
Open Datasets | No | The paper states that the data was gathered by crawling LiveJournal from May 2010 to July 2011 and provides the URL for LiveJournal (http://www.livejournal.com). However, it does not provide concrete access information (e.g., a specific link to the collected dataset, a DOI, or a formal citation with authors/year) for the specific dataset used in its experiments, nor does it refer to it as a standard public dataset.
Dataset Splits | Yes | The training and test sets were split 50:50 post-by-post at random (not by author). After the split, each author's post ordering was recovered from the timestamps T^(i) and author IDs. All regularization parameters were chosen on a validation set held out from the training examples, using a grid search over 5 candidates. (A sketch of this protocol follows the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud computing instance types.
Software Dependencies | No | The paper mentions several software tools used for comparison (e.g., Supervised Latent Dirichlet Allocation, LibSVM, Conditional Random Field implementations, and Matlab for VARX(1)) and provides links to their implementations. However, it does not specify version numbers for these tools or for any software dependencies directly related to its own proposed method.
Experiment Setup | No | The paper mentions that "All regularization parameters were chosen by a validation set on training examples using grid search of 5 candidates" and that, for SLDA, "the size of latent topics on SLDA was varied from 2|C| to 5|C|". However, it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed training configurations for its own proposed model.
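
Since the paper ships no code, the temporal idea described in the abstract (a latent emotion state that drifts between an author's timestamped posts, in the spirit of Brownian motion and the Kalman filter) can only be illustrated, not reproduced, here. The sketch below is an assumption-laden Python illustration, not the authors' implementation: the function name kalman_filter_emotion, the scalar latent state, and the parameters drift_rate and obs_noise are illustrative choices.

# Minimal sketch (not the paper's model): a 1-D random-walk / Kalman-filter
# view of an author's latent emotion over timestamped posts. The latent state
# diffuses between posts with variance proportional to the elapsed time, and
# each post contributes a noisy observation of the current state.

def kalman_filter_emotion(timestamps, observations,
                          drift_rate=0.1, obs_noise=1.0,
                          init_mean=0.0, init_var=1.0):
    """Return filtered means and variances of the latent emotion state.

    timestamps   : post times for one author (sorted, e.g. hours since first post)
    observations : per-post emotion scores (e.g. from a static classifier)
    drift_rate   : assumed variance of the latent drift per unit time
    obs_noise    : assumed variance of the per-post observation noise
    """
    mean, var = init_mean, init_var
    prev_t = timestamps[0]
    means, variances = [], []
    for t, y in zip(timestamps, observations):
        # Predict: the latent state drifts during the gap between posts.
        var = var + drift_rate * max(t - prev_t, 0.0)
        # Update: fold in the noisy per-post observation.
        gain = var / (var + obs_noise)
        mean = mean + gain * (y - mean)
        var = (1.0 - gain) * var
        means.append(mean)
        variances.append(var)
        prev_t = t
    return means, variances

if __name__ == "__main__":
    # Toy example: three posts by one author, with a long gap before the last post.
    times = [0.0, 2.0, 26.0]      # hours since the first post
    scores = [0.2, 0.5, -0.3]     # per-post emotion scores
    print(kalman_filter_emotion(times, scores))

This scalar sketch only conveys the predict-then-update structure of such a model; the paper's latent emotion space and learning procedure are not reconstructed here.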
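Similarly, the evaluation protocol noted under Dataset Splits and Experiment Setup (a 50:50 post-by-post random split, per-author post ordering recovered from timestamps and author IDs, and regularization chosen from 5 candidates on a validation split of the training data) can be sketched as follows. Names such as split_posts, recover_orderings, and the values in REG_CANDIDATES are assumptions for illustration; the paper does not publish this code or its candidate values.

# Minimal sketch of the evaluation protocol, assuming each post is a dict
# carrying author_id, timestamp, features, and an emotion label.
import random
from collections import defaultdict

def split_posts(posts, seed=0):
    """50:50 post-by-post random split (not author-based)."""
    rng = random.Random(seed)
    shuffled = posts[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def recover_orderings(posts):
    """Group posts by author and sort each author's posts by timestamp."""
    by_author = defaultdict(list)
    for p in posts:
        by_author[p["author_id"]].append(p)
    for author_posts in by_author.values():
        author_posts.sort(key=lambda p: p["timestamp"])
    return by_author

# Regularization chosen from 5 candidates via a validation split of the
# training data; these candidate values are illustrative only.
REG_CANDIDATES = [0.01, 0.1, 1.0, 10.0, 100.0]

def grid_search(train, fit, score, seed=0):
    """Pick the regularization value whose fitted model scores best on validation.

    fit(posts, reg) -> model, and score(model, posts) -> float are supplied
    by the caller; they stand in for the paper's training and evaluation code.
    """
    sub_train, valid = split_posts(train, seed=seed)
    return max(REG_CANDIDATES, key=lambda reg: score(fit(sub_train, reg), valid))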