Tackling Mental Health by Integrating Unobtrusive Multimodal Sensing
Authors: Dawei Zhou, Jiebo Luo, Vincent Silenzio, Yun Zhou, Jile Hu, Glenn Currier, Henry Kautz
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this study, we investigate how users' online social activities and physiological signals detected through ubiquitous sensors can be utilized in realistic scenarios for monitoring their mental health states. First, we extract a suite of multimodal time-series signals using modern computer vision and signal processing techniques, from recruited participants while they are immersed in online social media that elicit emotions and emotion transitions. Next, we use machine learning techniques to build a model that establishes the connection between mental states and the extracted multimodal signals. Finally, we validate the effectiveness of our approach using two groups of recruited subjects. |
| Researcher Affiliation | Academia | Dawei Zhou, Jiebo Luo, Vincent Silenzio, Yun Zhou, Jile Hu, Glenn Currier, and Henry Kautz, University of Rochester, Rochester, NY 14627 |
| Pseudocode | No | The paper describes the steps of its algorithms and processes in descriptive text (e.g., 'We use four steps to exploit this effect...', 'We take the following steps to preprocess the raw data:'), but it does not include any explicitly labeled pseudocode blocks or algorithm figures. |
| Open Source Code | No | The paper mentions and links to third-party tools like 'OpenCV' and 'Sentiment140', but it does not provide any statement or link to source code for the authors' own methodology or implementation. |
| Open Datasets | No | The paper states that data was collected from recruited participants ('We enrolled 27 participants...'), and describes the collection process ('Our experiment is based on 162 videos from 27 people...'). However, it does not provide concrete access information (e.g., URL, DOI, repository, or citation to an established public dataset) for the dataset used in the experiments. |
| Dataset Splits | Yes | To evaluate our entire system, we use a leave-one-subject-out procedure where data from the testing participant is not used in the training phase. (A minimal sketch of this protocol appears after the table.) |
| Hardware Specification | No | The paper mentions the use of 'webcams built in most mobile devices (laptops, tablets, and smartphones)' for data collection and notes that 'front cameras and processors on most of today's Android devices can satisfy our computation needs.' However, it does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used for training or running the actual experiments. |
| Software Dependencies | No | The paper mentions using 'OpenCV' and 'Sentiment140' but does not provide specific version numbers for these or any other software libraries, environments, or dependencies. |
| Experiment Setup | No | The paper describes general aspects of the experimental setup, such as the use of logistic regression and SVM classifiers, a 1-min sliding window, and data normalization to [0,1]. However, it lacks specific details on hyperparameters for the machine learning models (e.g., learning rates, regularization parameters, number of epochs) or other system-level training configurations needed for full reproducibility. (An illustrative setup sketch follows the table.) |
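The leave-one-subject-out split quoted in the Dataset Splits row maps directly onto standard tooling. The sketch below is not the authors' code: it assumes a feature matrix with one row per window, binary mental-state labels, and a per-row subject ID (all placeholders here), and uses scikit-learn's `LeaveOneGroupOut` to hold out one participant at a time, matching the paper's 27 subjects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical data: one feature row per 1-min window, tagged with the
# subject ID of the participant it came from (27 subjects, as in the paper).
rng = np.random.default_rng(0)
X = rng.random((270, 16))             # placeholder features in [0, 1]
y = rng.integers(0, 2, 270)           # placeholder binary mental-state labels
subjects = np.repeat(np.arange(27), 10)

# Each fold trains on 26 subjects and tests on the held-out one, so no
# data from the testing participant is used in the training phase.
logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean leave-one-subject-out accuracy: {np.mean(scores):.3f}")
```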
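For the Experiment Setup row, a minimal sketch of the reported ingredients (a 1-min sliding window, normalization to [0,1], and logistic regression / SVM classifiers) could look as follows. The sampling rate, per-window features, and all hyperparameters are assumptions for illustration only; the paper does not report them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def sliding_windows(signal, fs, window_s=60, step_s=60):
    """Cut a 1-D signal sampled at fs Hz into fixed-length windows."""
    size, step = int(window_s * fs), int(step_s * fs)
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

# Placeholder 30 Hz signal; each 1-min window is summarized with simple
# statistics (the paper does not specify its per-window features at this
# level of detail).
signal = np.random.default_rng(0).random(30 * 60 * 20)
windows = sliding_windows(signal, fs=30)
X = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])
y = np.random.default_rng(1).integers(0, 2, len(X))  # placeholder labels

# The two classifier families named in the paper, each preceded by a
# [0, 1] min-max scaler that the pipeline fits on training data only.
# Hyperparameters are scikit-learn defaults, not values from the paper.
models = {
    "logistic_regression": make_pipeline(MinMaxScaler(),
                                         LogisticRegression(max_iter=1000)),
    "svm": make_pipeline(MinMaxScaler(), SVC(kernel="rbf")),
}
models["logistic_regression"].fit(X, y)
```

Wrapping the scaler and classifier in one pipeline keeps the [0,1] normalization from leaking test-set statistics into training, which matters under the leave-one-subject-out protocol sketched above.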