Latent Independent Excitation for Generalizable Sensor-based Cross-Person Activity Recognition
Authors: Hangwei Qian, Sinno Jialin Pan, Chunyan Miao (pp. 11921-11929)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experimental evaluations are conducted on three benchmark datasets to demonstrate the superiority of the proposed method over state-of-the-art solutions. Our experimental evaluations on three activity datasets validate that our model can outperform state-of-the-art methods with enhanced generalization capability. |
| Researcher Affiliation | Academia | Hangwei Qian, Sinno Jialin Pan, Chunyan Miao Nanyang Technological University, Singapore {hangwei.qian, sinnopan, ascymiao}@ntu.edu.sg |
| Pseudocode | No | The paper describes the model architecture and training process, accompanied by Figure 1, but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and high-resolution figures are available at https://github.com/Hangwei12358/cross-person-HAR. |
| Open Datasets | Yes | We evaluate the proposed method on three large-scale wearable-sensor-based benchmark datasets. The UCIHAR dataset (Anguita et al. 2012) contains six daily activities... The Opportunity dataset (Chavarriaga et al. 2013) collects 4 participants' daily activities... The UniMiB SHAR dataset (Micucci, Mobilio, and Napoletano 2016) records 9 types of activities... |
| Dataset Splits | Yes | We conduct experiments with a leave-one-domain-out strategy in each dataset: one of the domains is treated as the unseen target domain, and the remaining domains are considered as available source domains. To be fair, we set the proportion of training and test data in both settings to be identical. (A hedged code sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware (e.g., GPU models, CPU types, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions PyTorch (Paszke et al. 2019) but does not provide a specific version number for PyTorch or any other key software dependency. |
| Experiment Setup | Yes | The batch size is set to 64, and the maximum training epoch is set to 100. For all methods except CoDATS, the Adam optimizer with learning rate 10^-3 and weight decay 10^-3 is used. For the CoDATS method, the learning rate is reduced to 10^-4 and the training epoch is set to 500. We tune parameters l ∈ {1, 2, 3} and h ∈ {32, 64, 128, 256, 512}. β is chosen from {0.002, 0.01, 0.1, 1, 5, 10, 100}. Also, α and γ are set to keep the three loss values at a similar order of magnitude. (A minimal training-setup sketch follows the table.) |
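
The leave-one-domain-out protocol quoted above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the array shapes, the toy data, and the `leave_one_domain_out` helper are all assumptions; only the idea of holding out one person's domain as the test set while training on the rest comes from the paper.

```python
import numpy as np

def leave_one_domain_out(X, y, domains, target_domain):
    """Hold out one domain (person) as the unseen test set; train on the rest."""
    mask = domains == target_domain
    return (X[~mask], y[~mask]), (X[mask], y[mask])

# Toy data: 1000 windows of 128 timesteps x 9 sensor channels, 4 persons.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128, 9))
y = rng.integers(0, 6, size=1000)        # 6 activity classes, as in UCIHAR
domains = rng.integers(0, 4, size=1000)  # person IDs serve as domain labels

# Treat each domain as the unseen target in turn, as the paper describes.
for d in np.unique(domains):
    (X_src, y_src), (X_tgt, y_tgt) = leave_one_domain_out(X, y, domains, d)
    print(f"target domain {d}: train={len(X_src)}, test={len(X_tgt)}")
```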
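
The reported optimization settings translate directly into a PyTorch configuration. The sketch below is a minimal assumption-laden stand-in: the network is a placeholder (the paper tunes depth l and width h over the grids listed above), and the toy tensors are invented; only the batch size, optimizer, learning rate, weight decay, and epoch count come from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network; the actual architecture (depth l, width h) is tuned
# over l in {1, 2, 3} and h in {32, 64, 128, 256, 512} in the paper.
model = nn.Sequential(
    nn.Flatten(),               # (batch, 128, 9) -> (batch, 1152)
    nn.Linear(128 * 9, 128),
    nn.ReLU(),
    nn.Linear(128, 6),          # 6 activity classes
)

# Settings reported in the paper (for all methods except CoDATS).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
criterion = nn.CrossEntropyLoss()

X = torch.randn(1000, 128, 9)   # toy sensor windows (assumed shape)
y = torch.randint(0, 6, (1000,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

for epoch in range(100):        # maximum training epoch = 100
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```

For CoDATS, the quoted setup instead uses a learning rate of 10^-4 and 500 epochs; the paper's full objective also weights three loss terms by α, β, and γ, which this classification-only sketch omits.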