Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables

Authors: Nils Y. Hammerla, Shane Halloran, Thomas Plötz

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper we rigorously explore deep, convolutional, and recurrent approaches across three representative datasets that contain movement data captured with wearable sensors. We describe how to train recurrent approaches in this setting, introduce a novel regularisation approach, and illustrate how they outperform the state-of-the-art on a large benchmark dataset. We investigate the suitability of each model for HAR, across thousands of recognition experiments with randomly sampled model configurations, explore the impact of hyperparameters using the fANOVA framework, and provide guidelines for the practitioner who wants to apply deep learning in their problem setting.
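The abstract's "thousands of recognition experiments with randomly sampled model configurations" amounts to a random hyperparameter search. A minimal sketch is given below; the parameter names and ranges are illustrative assumptions, not the paper's actual search space (which the paper lists in its table 1).

```python
import random

def sample_configuration(rng):
    """Draw one random model configuration.

    Ranges are illustrative assumptions, not the paper's search space.
    """
    return {
        "learning_rate": 10 ** rng.uniform(-4, -1),        # log-uniform draw
        "num_layers": rng.choice([1, 2, 3]),               # network depth
        "units_per_layer": rng.choice([64, 128, 256, 512]),
        "dropout": rng.uniform(0.0, 0.5),
    }

# Sample many configurations, one recognition experiment per configuration.
rng = random.Random(42)
configs = [sample_configuration(rng) for _ in range(1000)]
```

Each sampled configuration would then be trained and evaluated independently; a tool such as fANOVA can afterwards attribute validation-performance variance to individual hyperparameters.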
Researcher Affiliation | Collaboration | Nils Y. Hammerla (1,2), Shane Halloran (2), Thomas Plötz (2); (1) babylon health, London, UK; (2) Open Lab, School of Computing Science, Newcastle University, UK
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/nhammerla/deepHAR
Open Datasets | Yes | We select three datasets typical for HAR in ubicomp for the exploration in this work. Each dataset corresponds to a different application of HAR. The first, Opportunity, contains manipulative gestures like opening and closing doors, which are short in duration and non-repetitive. The second, PAMAP2, contains prolonged and repetitive physical activities, typical for systems aiming to characterise energy expenditure. The last, Daphnet Gait, corresponds to a medical application where participants exhibit a typical motor complication in Parkinson's disease that is known to have a large inter-subject variability. The datasets are: the Opportunity dataset (Opp) [Chavarriaga et al., 2013], the PAMAP2 dataset [Reiss and Stricker, 2012], and the Daphnet Gait dataset (DG) [Bächlin et al., 2009].
Dataset Splits | Yes | Opportunity: we use run 2 from subject 1 as our validation set, and replicate the most popular recognition challenge by using runs 4 and 5 from subjects 2 and 3 in our test set; the remaining data is used for training. PAMAP2: we use runs 1 and 2 from subject 5 in our validation set and runs 1 and 2 from subject 6 in our test set; the remaining data is used for training. Daphnet Gait: we use run 1 from subject 9 in our validation set, runs 1 and 2 from subject 2 in our test set, and the rest for training.
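Since the splits above are defined purely by (subject, run) pairs, they can be captured in a small lookup table. The sketch below shows this for PAMAP2; the pairs come from the description above, while the function name and data layout are assumptions for illustration.

```python
# (subject, run) pairs that belong to the held-out sets for PAMAP2,
# as described in the review row above.
PAMAP2_SPLITS = {
    "validation": [(5, 1), (5, 2)],
    "test": [(6, 1), (6, 2)],
}

def assign_split(subject, run):
    """Return 'validation', 'test', or 'train' for a (subject, run) pair."""
    for split, pairs in PAMAP2_SPLITS.items():
        if (subject, run) in pairs:
            return split
    # Everything not explicitly held out goes to training.
    return "train"
```

Opportunity and Daphnet Gait would use analogous tables with their own (subject, run) pairs.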
Hardware Specification | Yes | Experiments were run on a machine with three GPUs (NVIDIA GTX 980 Ti), where two model configurations are run on each GPU except for the largest networks.
Software Dependencies | No | The paper mentions machine learning frameworks like Torch7 as being accessible for deep learning, but does not provide specific version numbers for the software dependencies used in their experiments.
Experiment Setup | Yes | The different hyper-parameters explored in this work are listed in table 1. Each model is trained for at least 30 epochs and at most 300 epochs. After 30 epochs, training stops if there is no increase in validation performance for 10 subsequent epochs. The DNN is trained in a mini-batch approach, where each mini-batch contains 64 frames and is stratified with respect to the class distribution in the training set.
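The stopping rule above (at least 30 epochs, at most 300, stop once validation performance has not improved for 10 epochs) can be sketched as a simple training loop. The `train_one_epoch` and `validate` callables are placeholders for the model-specific steps, not part of the paper's code.

```python
def train_with_early_stopping(train_one_epoch, validate,
                              min_epochs=30, max_epochs=300, patience=10):
    """Training loop with the stopping rule described above.

    Always trains for min_epochs; afterwards stops as soon as validation
    performance has not improved for `patience` consecutive epochs.
    """
    best_score = float("-inf")
    epochs_since_best = 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        score = validate()
        if score > best_score:
            best_score, epochs_since_best = score, 0
        else:
            epochs_since_best += 1
        if epoch >= min_epochs and epochs_since_best >= patience:
            break
    return epoch, best_score
```

Stratified mini-batching (each 64-frame batch mirroring the class distribution of the training set) would be handled separately inside `train_one_epoch`.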