Instance-Wise Dynamic Sensor Selection for Human Activity Recognition

Authors: Xiaodong Yang, Yiqiang Chen, Hanchao Yu, Yingwei Zhang, Wang Lu, Ruizhe Sun

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the performance of IDSS, we conduct experiments on three real-world HAR datasets. The experimental results show that IDSS can effectively reduce the overall sensor number without losing accuracy and outperforms the state-of-the-art methods regarding the combined measurement of accuracy and sensor number.
Researcher Affiliation | Academia | (1) The Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, 100190; (2) University of Chinese Academy of Sciences, Beijing, China, 100049; (3) Peng Cheng Laboratory, Shenzhen, China, 518055; {yangxiaodong, yqchen, yuhanchao, zhangyingwei, luwang, sunruizhe18s}@ict.ac.cn
Pseudocode | Yes | Algorithm 1 (IDSS Learning with Mutual DAgger) and Algorithm 2 (IDSS Inference)
Open Source Code | No | The paper does not contain any statement about releasing code or a link to a repository for the described methodology.
Open Datasets | Yes | MHEALTH (Banos et al. 2015): 10 subjects wore a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer on the right wrist and left ankle, and a 3-axis accelerometer on the chest. PAMAP2 (Reiss and Stricker 2012): 8 subjects wore a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer on the chest and on the dominant side's wrist and ankle. Activity Net: 8 subjects wore a 3-axis accelerometer on the chest, both wrists and both ankles.
Dataset Splits | Yes | We conduct the experiments with leave-one-out cross-validation, where one of the subjects is selected for testing and the others for training.
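The leave-one-subject-out protocol quoted above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the array shapes, the number of subjects, and the nearest-centroid stand-in classifier are all assumptions (the paper uses neural networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for windowed feature vectors, activity labels, and the
# subject each window came from (the paper's datasets have 8-10 subjects).
X = rng.normal(size=(200, 13))
y = rng.integers(0, 4, size=200)
subjects = rng.integers(0, 10, size=200)

def nearest_centroid_fit(X_tr, y_tr):
    """Tiny stand-in classifier: per-class mean feature vector."""
    return {c: X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}

def nearest_centroid_predict(model, X_te):
    classes = np.array(sorted(model))
    dists = np.stack([np.linalg.norm(X_te - model[c], axis=1) for c in classes])
    return classes[dists.argmin(axis=0)]

fold_acc = []
for held_out in np.unique(subjects):
    # Leave-one-out over subjects: one subject tests, all others train.
    test_mask = subjects == held_out
    model = nearest_centroid_fit(X[~test_mask], y[~test_mask])
    pred = nearest_centroid_predict(model, X[test_mask])
    fold_acc.append((pred == y[test_mask]).mean())

print(f"mean accuracy over {len(fold_acc)} leave-one-subject-out folds: "
      f"{np.mean(fold_acc):.3f}")
```

The key point is that the split is grouped by subject, not by sample, so no data from the test subject leaks into training.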
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions that 'Both the NNs are optimized by ADAM solvers' and 'We use the Neural Network (NN) to build the classification model and the policy function in the MDP', but does not provide specific software names with version numbers for the libraries or frameworks used.
Experiment Setup | Yes | To capture the activities, 1-second sliding windows with no overlap are used to segment the data stream, and all numeric values are normalized into [-1, 1]. For each sensor axis, 13 statistical temporal-domain features are extracted for φ(x), e.g., mean, variance, deviation. A Neural Network (NN) is used to build both the classification model and the policy function in the MDP. Features from unselected sensors are imputed with zeros; using a sparse feature vector also improves classification efficiency at test time. Both NNs are optimized by ADAM solvers. The experiments use leave-one-out cross-validation, where one of the subjects is selected for testing and the others for training. The iteration round MaxIter for training is set to 10.
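The preprocessing steps in this row (non-overlapping 1-second windows, normalization into [-1, 1], per-axis statistical features, and zero-imputation of unselected sensors) can be sketched as below. The sampling rate, the sensor/channel layout, and all feature choices beyond mean/variance/deviation are illustrative assumptions; the paper extracts 13 statistics per axis but only names those three.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 50  # assumed sampling rate in Hz; not restated in this row

# Toy stream: 10 s of data from 5 sensors x 3 axes = 15 channels.
stream = rng.normal(size=(10 * FS, 15))

# Min-max normalize every channel into [-1, 1].
lo, hi = stream.min(axis=0), stream.max(axis=0)
stream = 2 * (stream - lo) / (hi - lo) - 1

def window_features(w):
    """A handful of per-axis statistics; only mean/variance/deviation are
    named in the paper, the min/max here are illustrative extras."""
    return np.concatenate([w.mean(0), w.var(0), w.std(0),
                           w.min(0), w.max(0)])

# Segment with 1-second windows and no overlap, then featurize each window.
windows = stream.reshape(-1, FS, stream.shape[1])
features = np.stack([window_features(w) for w in windows])

# IDSS-style sparsity: features of unselected sensors are imputed by zeros.
n_stats, n_channels = 5, 15
selected = np.zeros(n_channels, dtype=bool)
selected[:6] = True          # e.g. keep only the first two 3-axis sensors
mask = np.tile(selected, n_stats)  # features are ordered stat-by-stat
sparse_features = features * mask

print(features.shape, sparse_features.shape)
```

Zeroing the columns of deselected channels keeps the feature vector a fixed length, so the same classifier can score any sensor subset without retraining its input layer.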