A Novel Distribution-Embedded Neural Network for Sensor-Based Activity Recognition
Authors: Hangwei Qian, Sinno Jialin Pan, Bingshui Da, Chunyan Miao
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on four datasets to demonstrate the effectiveness of our proposed method compared with state-of-the-art baselines. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, ²Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly, ³Interdisciplinary Graduate School, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code of the proposed DDNN is available at https://github.com/Hangwei12358/IJCAI2019_DDNN. |
| Open Datasets | Yes | We conduct experiments on four sensor-based activity datasets... The Daphnet Gait dataset (DG) [Bächlin et al., 2010]... The Opportunity dataset (OPPOR) [Chavarriaga et al., 2013]... The UCIHAR dataset [Anguita et al., 2012]... The PAMAP2 dataset [Reiss and Stricker, 2012] |
| Dataset Splits | Yes | We randomly split activities into training set $\{(X_i, y_i)\}_{i=1}^{n}$, validation set $\{(X_j, y_j)\}_{j=1}^{m}$ and test set $\{X_t\}_{t=1}^{p}$... run 2 from subject 1 as validation set, runs 4 and 5 from subjects 2 and 3 as test set and the rest as training set. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | All experiments are run on a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions the use of Adam optimizer and ReLU activation functions but does not specify any software or library versions (e.g., Python, PyTorch, TensorFlow, scikit-learn versions). |
| Experiment Setup | Yes | The batch size is set to 64, and the maximum training epoch is 100. The Adam optimizer is used for training with learning rate $10^{-3}$ and weight decay $10^{-3}$. Both LSTMs in the spatial and temporal modules have $l$ layers with $h$-dimensional hidden representations, where $l \in \{1, 2, 3\}$ and $h \in \{32, 64, 128, 256, 512, 1024\}$. Four convolutional layers with filter size (1, 5) are utilized in the temporal module. (A configuration sketch follows the table.) |
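The Dataset Splits row quotes a random train/validation/test partition of segmented activity windows. Below is a minimal sketch of such a split; the function name, split fractions, and array shapes are assumptions for illustration and are not taken from the paper or its repository.

```python
# Hypothetical random train/validation/test split over segmented activity windows.
# Split fractions and the random seed are assumptions, not values from the paper.
import numpy as np

def random_split(X, y, val_frac=0.1, test_frac=0.2, seed=0):
    """Randomly partition windows X with labels y into train/val/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_frac)
    n_test = int(len(X) * test_frac)
    val_idx = idx[:n_val]
    test_idx = idx[n_val:n_val + n_test]
    train_idx = idx[n_val + n_test:]
    return (X[train_idx], y[train_idx]), (X[val_idx], y[val_idx]), (X[test_idx], y[test_idx])
```

For the Opportunity dataset the quoted split is subject- and run-based rather than random, so a filter on subject/run identifiers would replace the permutation above.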
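The Experiment Setup row lists concrete hyperparameters. The hedged PyTorch sketch below instantiates components with those reported values (batch size 64, 100 epochs, Adam with learning rate and weight decay of $10^{-3}$, LSTM depth/width drawn from the stated search ranges, four convolutions with kernel size (1, 5)); the channel counts, class count, and overall wiring are placeholders and do not reproduce the actual DDNN architecture.

```python
# Hedged sketch of the reported training configuration.
# Only batch size, epochs, Adam settings, LSTM search ranges, and the (1, 5)
# kernel size come from the paper; everything else is an assumed placeholder.
import torch
import torch.nn as nn

n_channels = 9   # number of sensor channels (assumed)
n_classes = 6    # number of activity classes (assumed)
n_layers = 2     # l is tuned over {1, 2, 3}
hidden_dim = 128 # h is tuned over {32, 64, 128, 256, 512, 1024}

# LSTM block of the kind used in the spatial and temporal modules.
lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_dim,
               num_layers=n_layers, batch_first=True)

# Four convolutional layers with kernel size (1, 5), as in the temporal module.
convs = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=(1, 5)), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=(1, 5)), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=(1, 5)), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=(1, 5)), nn.ReLU(),
)

classifier = nn.Linear(hidden_dim, n_classes)  # placeholder output head
params = list(lstm.parameters()) + list(convs.parameters()) + list(classifier.parameters())

# Optimizer and training schedule reported in the paper.
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-3)
batch_size, max_epochs = 64, 100
```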