Multi-modality Sensor Data Classification with Selective Attention
Authors: Xiang Zhang, Lina Yao, Chaoran Huang, Sen Wang, Mingkui Tan, Guodong Long, Can Wang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, University of New South Wales; School of Information and Communication Technology, Griffith University; School of Software Engineering, South China University of Technology; Center for Quantum Computation and Intelligent Systems, University of Technology Sydney |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. Figure 1 shows a flowchart, but not a formal algorithm. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | PAMAP2. The PAMAP2 dataset [Fida et al., 2015] was collected from 9 participants (8 males and 1 female) aged 27±3. 8 ADLs are selected as a subset in our paper. The activity is measured by 1 IMU attached to the participant's wrist. |
| Dataset Splits | No | All the three datasets are randomly split into the training set (90%) and the testing set (10%). |
| Hardware Specification | Yes | F needs around 4000 sec on the Titan X (Pascal) GPU for each step while the whole focal zone optimization contains N (N > 2000) iterations. |
| Software Dependencies | No | The paper mentions software components like Adam Optimizer and Dueling DQN, but does not specify their version numbers or the versions of any programming languages or libraries used. |
| Experiment Setup | Yes | Through the previous experimental tuning and the Orthogonal Array based hyper-parameter tuning method [Zhang et al., 2017], the hyper-parameters are set as follows. In the selective attention learning: the order of the autoregressive model is 3; K = 128; the Dueling DQN has 4 layers, and the node numbers in each layer are 2 (input layer), 32 (FCL), 4 (A(st, at)) + 1 (V(st)), 4 (output). The decay parameter γ = 0.8, ne = ns = 50, N = 2,500, ϵ = 0.2, learning rate = 0.01, memory size = 2000, length penalty coefficient β = 0.1, and the minimum length of the focal zone is set to 10. In the deep learning classifier: the node number in the input layer equals the number of feature dimensions, followed by three hidden layers with 164 nodes each, two layers of LSTM cells, and one output layer. The learning rate = 0.001, ℓ2-norm coefficient λ = 0.001, forget bias = 0.3, batch size = 9, and training runs for 1000 iterations. |
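The classifier half of the reported setup can be sketched from the quoted hyper-parameters alone. The following is a minimal, hedged PyTorch reconstruction, not the authors' code: three 164-node hidden layers, two stacked LSTM layers with forget bias 0.3, an Adam optimizer with learning rate 0.001 and ℓ2 weight decay 0.001, and batch size 9. The input dimension (`input_dim=14`) and the class count (`n_classes=8`, matching the 8 selected ADLs) are assumptions, since the paper ties the input layer to the dataset's feature dimensionality.

```python
import torch
import torch.nn as nn

class SensorClassifierSketch(nn.Module):
    """Hypothetical reconstruction of the paper's deep classifier."""

    def __init__(self, input_dim=14, n_classes=8, hidden=164):
        super().__init__()
        # Three fully-connected hidden layers with 164 nodes each.
        self.mlp = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Two stacked layers of LSTM cells.
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)
        # Set the forget-gate bias to 0.3, as reported in the paper.
        # PyTorch orders LSTM bias chunks as (input, forget, cell, output).
        for name, param in self.lstm.named_parameters():
            if "bias" in name:
                n = param.size(0) // 4
                param.data[n:2 * n].fill_(0.3)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        h = self.mlp(x)
        h, _ = self.lstm(h)
        return self.out(h[:, -1])  # classify from the last time step

model = SensorClassifierSketch()
# Adam with lr = 0.001; weight_decay stands in for the ℓ2 penalty λ = 0.001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
# Batch size 9 as in the paper; sequence length 20 is an arbitrary placeholder.
logits = model(torch.randn(9, 20, 14))
```

In a training loop, `logits` would feed a cross-entropy loss over the 8 activity classes; the Dueling DQN that selects the focal zone is a separate component and is not reconstructed here.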