PD Disease State Assessment in Naturalistic Environments Using Deep Learning
Authors: Nils Hammerla, James Fisher, Peter Andras, Lynn Rochester, Richard Walker, Thomas Ploetz
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Based on a large data-set collected from 34 people with PD we illustrate that deep learning outperforms other approaches in generalisation performance, despite the unreliable labelling characteristic for this problem setting, and how such systems could improve current clinical practice. |
| Researcher Affiliation | Academia | Nils Y. Hammerla, Culture Lab, Digital Interaction Group, Newcastle University, UK (nils.hammerla@ncl.ac.uk); James M. Fisher, Health Education North East, UK; Peter Andras, School of Computing and Mathematics, Keele University, UK; Lynn Rochester, Institute of Neuroscience, Newcastle University, UK; Richard Walker, Northumbria Healthcare NHS Foundation Trust, UK; Thomas Plötz, Culture Lab, Digital Interaction Group, Newcastle University, UK |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper provides a link for the dataset but does not state that the source code for the methodology is available or provide a link to it. |
| Open Datasets | Yes | Based on a large data-set collected from 34 people with PD we illustrate that deep learning outperforms other approaches in generalisation performance, despite the unreliable labelling characteristic for this problem setting, and how such systems could improve current clinical practice. Footnote: Data-set is available at http://di.ncl.ac.uk/naturalistic PD. |
| Dataset Splits | Yes | To minimise the effect of large pairwise similarity of subsequent minutes of recording we follow a leave-one-day-out cross validation approach, where e.g. the first day of recording from all patients constitutes a fold. The second setting simulates best practice for assessment systems in PD, where the smaller but clinician validated LAB data-set is used for training in a stratified 7-fold cross validation which is subsequently applied to the HOME data-set to assess generalisation performance. |
| Hardware Specification | No | The paper mentions that training takes 'around one day per fold on a GPU' but does not specify the model or detailed specifications of the GPU or other hardware components. |
| Software Dependencies | No | The paper mentions general machine learning models and techniques but does not provide specific version numbers for software dependencies or libraries used. |
| Experiment Setup | Yes | Learning rates were set to 10⁻⁴ for the gaussian-binary RBM, and 10⁻³ for the binary-binary RBM, with a momentum of 0.9 and a weight-cost of 10⁻⁵. Each RBM is trained for 500 epochs with batches containing 500 samples. In the subsequent fine-tuning phase we add a top-layer (randomly initialised, σ = 0.01) to the generative model. This top-layer contains 4 units in a softmax group that correspond to our 4 classes of interest: asleep, off, on, and dyskinetic. Using the labels for each input frame we perform 250 epochs of conjugate gradients with batches that gradually increase in size from 256 up to 2,048 (stratified) samples. |
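The leave-one-day-out protocol quoted in the Dataset Splits row can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `records` tuple layout and function name are assumptions.

```python
from collections import defaultdict

def leave_one_day_out_folds(records):
    """Leave-one-day-out cross validation: each fold holds out the
    same recording day across all patients, so e.g. day 1 of every
    patient together forms one test fold.

    records: iterable of (patient_id, day_index, sample) tuples.
    Yields (train, test) lists per held-out day.
    """
    by_day = defaultdict(list)
    for patient_id, day, sample in records:
        by_day[day].append((patient_id, day, sample))
    for held_out_day in sorted(by_day):
        test = by_day[held_out_day]
        train = [r for d, rows in by_day.items()
                 if d != held_out_day for r in rows]
        yield train, test
```

Grouping by day rather than by patient reflects the paper's stated goal of minimising the large pairwise similarity between subsequent minutes of recording from the same session.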
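The hyperparameters in the Experiment Setup row can be collected into a small sketch. The momentum/weight-decay update and the doubling batch-size schedule are assumptions for illustration; the paper only states the values themselves and that fine-tuning batches "gradually increase" from 256 to 2,048.

```python
# Hyperparameters as reported in the paper (values only; structure is ours).
RBM_PRETRAINING = {
    "learning_rate_gaussian_binary": 1e-4,
    "learning_rate_binary_binary": 1e-3,
    "momentum": 0.9,
    "weight_cost": 1e-5,
    "epochs": 500,
    "batch_size": 500,
}

def momentum_update(w, velocity, gradient, lr,
                    momentum=0.9, weight_cost=1e-5):
    """One momentum + weight-decay parameter update (scalar form),
    as commonly used in RBM pretraining. Sign convention assumes
    gradient ascent on the log-likelihood."""
    velocity = momentum * velocity + lr * (gradient - weight_cost * w)
    return w + velocity, velocity

def batch_size_schedule(start=256, end=2048):
    """Gradually increasing batch sizes for fine-tuning. Doubling is
    an assumed policy; the paper only gives the 256 -> 2,048 range."""
    size = start
    while size <= end:
        yield size
        size *= 2
```

The 4-unit softmax top-layer (asleep, off, on, dyskinetic) would then be trained on top of the pretrained stack with 250 epochs of conjugate gradients, per the quoted setup.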