PhoneMD: Learning to Diagnose Parkinson’s Disease from Smartphone Data
Authors: Patrick Schwab, Walter Karlen (pp. 1118–1125)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our attentive deep-learning models achieve significant improvements in predictive performance over strong baselines (area under the receiver operating characteristic curve = 0.85) in data from a cohort of 1853 participants. We perform experiments on real-world data collected from 1853 mPower participants. |
| Researcher Affiliation | Academia | Patrick Schwab Institute of Robotics and Intelligent Systems ETH Zurich, Switzerland patrick.schwab@hest.ethz.ch Walter Karlen Institute of Robotics and Intelligent Systems ETH Zurich, Switzerland walter.karlen@ieee.org |
| Pseudocode | No | The paper describes methods textually and with diagrams (Figure 1, Figure 2, Figure 3) but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the authors' source code for the described methodology is publicly available. |
| Open Datasets | Yes | We utilise data collected during the mPower study, a large-scale observational study about PD conducted entirely through a smartphone app (Bot et al. 2016). The data used in this manuscript were contributed by users of the Parkinson mPower mobile application as part of the mPower study developed by Sage Bionetworks and described in Synapse (doi:10.7303/syn4993293). |
| Dataset Splits | Yes | We performed a random split stratified by participant age to divide the available dataset into a training set (70%), validation set (10%), and test set (20%). Each participant and the tests they performed were assigned to exactly one of the three folds without any overlap (Table 3). Table 3: Subjects (#) 1314 (70%) 192 (10%) 347 (20%). |
| Hardware Specification | No | We acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research. While GPUs are mentioned, no specific models or detailed hardware specifications (e.g., CPU, RAM, or specific GPU model numbers) are provided for the experimental setup. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | For the RF models, we used 512 to 1024 trees in the forest and a maximum tree depth between 3 and 5. For all neural networks, we used dropout of 0 to 70% between hidden layers, an L2 penalty of 0, 0.0001 or 0.00001, and varying numbers of layers and hidden units depending on the test type (Appendix C). For the EAM, we used 2 to 5 stacked BLSTM layers with 16 to 64 hidden units each. We optimised the neural networks' binary cross-entropy for up to 500 epochs with a learning rate of 0.0001, a batch size of 32, and an early stopping patience of 12 epochs on the validation set. For memory reasons, we used a batch size of 2 for the end-to-end trained neural network. |
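The Dataset Splits row describes a participant-level random split stratified by age (70% train / 10% validation / 20% test, with each participant in exactly one fold). A minimal sketch of such a split is shown below; this is a hypothetical reconstruction using equal-frequency age bins, not the authors' released code, and the function name and binning scheme are assumptions.

```python
import random
from collections import defaultdict


def stratified_split(participant_ages, fractions=(0.7, 0.1, 0.2), n_bins=5, seed=0):
    """Assign participants to train/val/test folds, stratified by age.

    Hypothetical reconstruction of the split described in the paper:
    each participant (and therefore all of their tests) lands in exactly
    one fold, and the 70/10/20 ratio is approximately preserved inside
    each age bin.
    """
    rng = random.Random(seed)
    ages = sorted(participant_ages.values())
    # Equal-frequency age-bin edges for stratification (assumed scheme).
    edges = [ages[int(len(ages) * i / n_bins)] for i in range(1, n_bins)]
    bins = defaultdict(list)
    for pid, age in participant_ages.items():
        bins[sum(age >= e for e in edges)].append(pid)

    folds = {"train": [], "val": [], "test": []}
    for members in bins.values():
        rng.shuffle(members)
        n_train = round(len(members) * fractions[0])
        n_val = round(len(members) * fractions[1])
        folds["train"] += members[:n_train]
        folds["val"] += members[n_train:n_train + n_val]
        folds["test"] += members[n_train + n_val:]   # remainder -> test
    return folds
```

With 1853 simulated participants this yields fold sizes close to the 1314/192/347 breakdown reported in Table 3, with no participant appearing in more than one fold.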