Depression Detection via Harvesting Social Media: A Multimodal Dictionary Learning Solution
Authors: Guangyao Shen, Jia Jia, Liqiang Nie, Fuli Feng, Cunjun Zhang, Tianrui Hu, Tat-Seng Chua, Wenwu Zhu
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A series of experiments is conducted to validate this model, which outperforms several baselines by 3% to 10%. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Technology, Tsinghua University; TNList; (2) Department of Computer Science and Technology, Shandong University; (3) School of Computing, National University of Singapore; (4) School of Information and Communication Engineering, Beijing University of Posts and Telecommunications |
| Pseudocode | No | The paper provides mathematical formulations and descriptions of algorithms in text, but no explicit pseudocode blocks or algorithm figures are present. |
| Open Source Code | No | The paper states: 'In addition, we release these datasets with features to facilitate wellness study for computer science and psychology.' (footnote 3: http://depressiondetection.droppages.com/). While datasets and features are released, there is no explicit statement or link confirming the release of the source code for the methodology described in the paper. |
| Open Datasets | Yes | We construct benchmark datasets for online depression detection and analysis, including the well-labeled depression and non-depression datasets as well as a large-scale depression-candidate dataset. In addition, we release these datasets with features to facilitate wellness study for computer science and psychology. (Footnote 3: http://depressiondetection.droppages.com/) |
| Dataset Splits | Yes | We trained and tested these methods under 5-fold cross validation, with over 10 randomized experimental runs. (A sketch of this protocol appears below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory, cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions several software tools and libraries, such as the Porter stemmer, word2vec, the NLTK toolbox, LIWC, an LDA model, and Naive Bayes (the Pedregosa et al., 2011 citation refers to scikit-learn), but it does not specify version numbers for these components, which would be required for a reproducible description. |
| Experiment Setup | Yes | There are three key parameters in the MDL: two regularization parameters, λ in Eqn. (3) and p in Eqn. (5), as well as an implicit parameter D. The search ranges for λ, p, and D are [0.001, 0.04], [10^-5, 10^-1], and [50, 200], respectively. ... We finally observed that MDL reached the optimal performance when λ = 0.007, p = 10^-2.5, and D = 130. (A sketch of this parameter grid appears below the table.) |
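
The "Dataset Splits" row reports 5-fold cross validation repeated over ten randomized runs. The snippet below is a minimal sketch of that protocol using scikit-learn (the library behind the paper's Naive Bayes baseline); the feature matrix, labels, and classifier are hypothetical placeholders, not the authors' released data or code.

```python
# Minimal sketch of 5-fold cross validation repeated over 10 randomized runs.
# X, y, and the GaussianNB classifier are illustrative placeholders only.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 64))       # hypothetical multimodal feature vectors
y = rng.integers(0, 2, size=1000)     # hypothetical depressed / non-depressed labels

scores = []
for run in range(10):                 # "over 10 randomized experimental runs"
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=run)
    scores.extend(cross_val_score(GaussianNB(), X, y, cv=cv, scoring="f1"))

print(f"F1 over all folds/runs: {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```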
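
The "Software Dependencies" row lists NLTK and the Porter stemmer among the unversioned tools. The following is a purely illustrative example of that preprocessing step; the sample tweet and the whitespace tokenization are assumptions, not taken from the paper.

```python
# Illustrative NLTK Porter stemming of a made-up tweet; whitespace tokenization
# is used to avoid extra corpus downloads and is an assumption, not the paper's choice.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
tweet = "feeling hopeless and exhausted again tonight"
stems = [stemmer.stem(token) for token in tweet.split()]
print(stems)   # e.g. ['feel', 'hopeless', 'and', 'exhaust', 'again', 'tonight']
```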
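
The "Experiment Setup" row gives only the search ranges for λ, p, and D and the selected optimum. Below is a small sketch of how such a grid could be laid out; the step sizes are assumptions, and the model-training/scoring step is omitted because the MDL code is not released.

```python
# Sketch of the hyper-parameter grid implied by the reported ranges; step sizes are assumed.
import itertools
import numpy as np

lambdas = np.round(np.arange(0.001, 0.040 + 1e-9, 0.003), 3)  # lambda in [0.001, 0.04]
ps = np.logspace(-5, -1, num=9)                               # p in [1e-5, 1e-1], log-spaced
Ds = list(range(50, 201, 10))                                 # dictionary size D in [50, 200]

grid = list(itertools.product(lambdas, ps, Ds))
print(len(grid), "candidate settings")

# The reported optimum (lambda = 0.007, p = 10**-2.5, D = 130) falls inside this grid:
print(0.007 in lambdas, bool(np.isclose(ps, 10 ** -2.5).any()), 130 in Ds)
```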