AttnSense: Multi-level Attention Mechanism For Multimodal Human Activity Recognition
Authors: HaoJie Ma, Wenzhong Li, Xiao Zhang, Songcheng Gao, Sanglu Lu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments based on three public datasets show that AttnSense achieves a competitive performance in activity recognition compared with several state-of-the-art methods. |
| Researcher Affiliation | Academia | Haojie Ma, Wenzhong Li, Xiao Zhang, Songcheng Gao, and Sanglu Lu; State Key Laboratory for Novel Software Technology, Nanjing University. Emails: mhj1137633684@gmail.com, lwz@nju.edu.cn, sanglu@nju.edu.cn |
| Pseudocode | No | The paper describes the model architecture and equations but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | We evaluate AttnSense on three public HAR datasets. These datasets are recorded in different contexts by sensors either worn by subjects or embedded into objects that subjects manipulated. The statistics of the three datasets are depicted in Table 1. The first dataset is Heterogeneous [Stisen et al., ]. The second dataset is Skoda [Stiefmeier et al., 2008]. The third dataset is PAMAP2 [Reiss and Stricker, 2012]. |
| Dataset Splits | Yes | We preprocess the dataset as described and use the whole data from participant 1 for testing, and the rest of the dataset for training. ... We preprocess the dataset as described and use 10% of the data in each class for testing, and the remaining 90% for training. ... We preprocess the dataset as described and use the whole data from participant 6 for testing, and the rest of the dataset for training. Moreover, we perform 4-fold cross-validation for the Skoda dataset and leave-one-subject-out validation for the Heterogeneous and PAMAP2 datasets to obtain the best model configuration. |
| Hardware Specification | Yes | We build our model using TensorFlow and train it on a GTX 1070 Ti GPU. |
| Software Dependencies | No | The paper states "We build our model using TensorFlow" but does not provide a specific version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | The batch size is set to 64, and the network is optimized using RMSprop with learning rate 0.0001. The best numbers of convolutional layers for the Heterogeneous, Skoda, and PAMAP2 datasets are 3, 2, and 4, respectively. ... We get the best performance when using sliding windows of width 20, 15, and 20 for Heterogeneous, Skoda, and PAMAP2, respectively. |
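Since the paper releases no code, the evaluation protocol quoted above (sliding-window segmentation and a leave-one-subject-out split) can only be approximated. Below is a minimal sketch; the helper names, the non-overlapping step size, and the array shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sliding_windows(stream, width, step=None):
    """Segment a (time, channels) sensor stream into fixed-width windows.

    A step equal to the width (non-overlapping windows) is an assumption;
    the paper only reports the best window widths (20, 15, 20).
    """
    step = step or width
    n = (len(stream) - width) // step + 1
    return np.stack([stream[i * step : i * step + width] for i in range(n)])

def leave_one_subject_out(data, labels, subjects, held_out):
    """Hold out all samples of one participant for testing, train on the rest."""
    test_mask = subjects == held_out
    return (data[~test_mask], labels[~test_mask],
            data[test_mask], labels[test_mask])

# Toy example: one stream of 100 timesteps with 6 sensor channels,
# segmented with the width-20 window reported for Heterogeneous/PAMAP2.
stream = np.random.randn(100, 6)
windows = sliding_windows(stream, width=20)
print(windows.shape)  # (5, 20, 6)
```

A real reproduction would additionally need the per-dataset preprocessing the paper references but does not fully specify.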