Distinguishing discrete and continuous behavioral variability using warped autoregressive HMMs
Authors: Julia Costacurta, Lea Duncker, Blue Sheffer, Winthrop Gillis, Caleb Weinreb, Jeffrey Markowitz, Sandeep R Datta, Alex Williams, Scott Linderman
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using depth-camera recordings of freely moving mice, we demonstrate that the failure of ARHMMs to account for continuous behavioral variability results in duplicate cluster assignments. WARHMM achieves similar performance to the standard ARHMM while using fewer behavioral syllables. Further analysis of behavioral measurements in mice demonstrates that WARHMM identifies structure relating to response vigor. |
| Researcher Affiliation | Academia | Julia C. Costacurta (Stanford University, jcostac@stanford.edu); Lea Duncker (Stanford University, lduncker@stanford.edu); Blue Sheffer (Stanford University); Winthrop Gillis (Harvard Medical School); Caleb Weinreb (Harvard Medical School); Jeffrey E. Markowitz (Georgia Institute of Technology, Emory University); Sandeep R. Datta (Harvard Medical School); Alex H. Williams (New York University, Flatiron Institute, alex.h.williams@nyu.edu); Scott W. Linderman (Stanford University, scott.linderman@stanford.edu) |
| Pseudocode | No | The paper describes algorithms but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Discussed in Supplemental Material |
| Open Datasets | Yes | To do this, we reanalyze data from Wiltschko et al. [5], which represents the original application of the ARHMM to clustering mouse behavior. In the context of this dataset, the ARHMM approach is often also referred to as MoSeq, and we thus refer to the dataset as the MoSeq dataset. |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Discussed in Supplemental Material |
| Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Discussed in Supplemental Material |
| Software Dependencies | No | The paper mentions that training details are discussed in the supplemental material, but it does not explicitly list software dependencies with specific version numbers in the main text. |
| Experiment Setup | Yes | To provide a direct comparison of performance between the ARHMM, T-WARHMM, and GP-WARHMM, we trained each model using 50 epochs of stochastic EM. The ARHMM is equivalent to setting the number of τ-values in T-WARHMM to J = 1. T-WARHMM and GP-WARHMM each had J = 31 evenly spaced values of τ on the interval [-1, 1]. |
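The grid of discrete warping values described in the experiment setup can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact parameterization of τ (e.g. whether the grid lives in log space) is detailed in the paper's supplemental material, so treat the variable names and the direct use of `np.linspace` here as assumptions.

```python
import numpy as np

# Hypothetical sketch: J = 31 discrete time-warping values, evenly
# spaced on the interval [-1, 1], as described in the setup above.
J = 31
tau_grid = np.linspace(-1.0, 1.0, J)

# Setting J = 1 collapses the grid to a single warping value,
# which recovers a standard ARHMM with no time warping.
arhmm_grid = np.linspace(-1.0, 1.0, 1)
```

With J = 31 the grid spacing is 2/(J - 1) = 1/15, so adjacent warping values differ by a small, uniform step.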