Moderated and Drifting Linear Dynamical Systems
Authors: Jinyan Guan, Kyle Simek, Ernesto Brau, Clayton Morrison, Emily Butler, Kobus Barnard
ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our approach on a real dataset of self-recalled emotional experience measurements of heterosexual couples engaged in a conversation about a potentially emotional topic, with body mass index (BMI) considered as a moderator. We evaluate several models on their ability to predict future conversation dynamics (the last 20% of the data for each test couple), with shared parameters learned using held-out data. We validate the hypothesis that BMI affects the conversation dynamics for the experimentally chosen topic. |
| Researcher Affiliation | Academia | (1) Department of Computer Science, University of Arizona; (2) Department of Computer Science, Boston College; (3) School of Information: Science, Technology, and Arts, University of Arizona; (4) Norton School of Family and Consumer Sciences, University of Arizona |
| Pseudocode | Yes | Algorithm 1 Sample procedure from p(Q, σ, Θ, X1|Y) ... Algorithm 2 Sample procedure from p(Q, σ, s, Θ, X1|Y) |
| Open Source Code | No | The paper mentions providing web services: 'Predoehl, Andrew, Guan, Jinyan, Butler, Emily, and Barnard, Kobus. Comp Ties web app, 2015. URL http://www.compties.org/COM.html.' This links to a web application, not to the open-source code for the methodology described in the paper. |
| Open Datasets | Yes | The real data consist of recalled self-ratings of emotional experience from 38 heterosexual couples with different joint weight status during an emotional conversation in a social-experiment lab setting, as reported by Reed et al. (2015). |
| Dataset Splits | Yes | We develop a multi-stage evaluation procedure to learn the shared model parameters Q, σ and s and evaluate the predictive power of the learned models using 9-fold cross validation. First, we randomly divide the couples into nine groups. ... we use 100 samples from the posterior for the first 80% of time to compute 100 estimates for each time point in the next 20% (held out) of time for each testing couple. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9') used for the experiments. |
| Experiment Setup | Yes | The number of sampling iterations was 30,000 for parameter learning and 100,000 for parameter fitting. For model comparison, we provide the fitting and prediction errors for three baseline models, each predicting the last 20% of each couple's data by: 1) a line fitted to each partner's first 80% of observations; 2) the average of the first 80% of observations; and 3) a CLO model fitted by maximum likelihood estimation (MLE) of p(Y\|Θ) over the first 80% of time points. ... We somewhat arbitrarily set σ = 0.5 for the observation noise during both parameter learning and fitting. |
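
The evaluation protocol described above (nine random folds over couples, plus an 80/20 temporal split within each couple) can be sketched as follows. This is a minimal illustration, not the authors' code; the helper names `ninefold_split` and `time_split` are ours.

```python
import random

def ninefold_split(couple_ids, seed=0):
    """Randomly partition couples into nine groups for 9-fold
    cross-validation, as in the paper's multi-stage evaluation.
    (Hypothetical helper; the paper does not specify a seed.)"""
    ids = list(couple_ids)
    random.Random(seed).shuffle(ids)
    # Deal shuffled couples round-robin into nine folds.
    return [ids[i::9] for i in range(9)]

def time_split(series, train_frac=0.8):
    """Split one couple's time series: the first 80% of time points
    are used for fitting, the last 20% are held out for prediction."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]
```

Each fold then serves once as the test set while parameters shared across couples are learned on the remaining folds.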
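
The first two baselines (a fitted line and the observation mean, each extrapolated over the held-out 20%) are simple enough to sketch directly; the third (MLE-fitted CLO model) is model-specific and omitted here. Function names are ours, not from the paper.

```python
def line_baseline(train, horizon):
    """Least-squares line fitted to the first 80% of one partner's
    observations, extrapolated over the held-out horizon."""
    n = len(train)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(train) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, train))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return [intercept + slope * (n + t) for t in range(horizon)]

def mean_baseline(train, horizon):
    """Constant prediction: the mean of the first 80% of observations."""
    m = sum(train) / len(train)
    return [m] * horizon
```

For example, `line_baseline([0, 1, 2, 3], 2)` extrapolates the exact fit to `[4.0, 5.0]`.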