Sensor Alignment for Multivariate Time-Series Unsupervised Domain Adaptation

Authors: Yucheng Wang, Yuecong Xu, Jianfei Yang, Zhenghua Chen, Min Wu, Xiaoli Li, Lihua Xie

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results demonstrate the state-of-the-art performance of our proposed SEA on two public MTS datasets for MTS-UDA. The code is available at https://github.com/Frank-Wang-oss/SEA." "To evaluate the effectiveness of SEA, we test our model on two public datasets, C-MAPSS for remaining useful life prediction (Saxena et al. 2008) and Opportunity HAR for human activity recognition (Roggen et al. 2010)."
Researcher Affiliation | Collaboration | "1Nanyang Technological University, Singapore 2Institute for Infocomm Research, A*STAR, Singapore 3Centre for Frontier AI Research, A*STAR, Singapore {yucheng003, xuyu0014, yang0478, chen0832}@e.ntu.edu.sg, {wumin, xlli}@i2r.a-star.edu.sg, elhxie@ntu.edu.sg"
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "The code is available at https://github.com/Frank-Wang-oss/SEA."
Open Datasets | Yes | "To evaluate the effectiveness of SEA, we test our model on two public datasets, C-MAPSS for remaining useful life prediction (Saxena et al. 2008) and Opportunity HAR for human activity recognition (Roggen et al. 2010)."
Dataset Splits | No | "To construct the training dataset, we adopt a sliding window with a size of 128 and an overlapping of 50% as (Ragab et al. 2022a) did." The paper does not explicitly state train/validation/test dataset splits with percentages or counts.
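The quoted preprocessing (window size 128, 50% overlap) can be sketched as follows. This is an illustrative reconstruction, not the authors' code from the linked repository; the series length and the 9-channel count are arbitrary placeholders for the example.

```python
import numpy as np

def sliding_windows(series, window=128, overlap=0.5):
    """Segment a multivariate time series of shape [T, C] into
    overlapping windows of shape [window, C].

    window=128 and overlap=0.5 match the settings quoted from the
    paper; everything else here is a sketch for illustration.
    """
    step = int(window * (1 - overlap))  # 64 samples between window starts
    starts = range(0, len(series) - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

# Example: 1000 timesteps, 9 sensor channels (channel count is arbitrary)
x = np.random.randn(1000, 9)
wins = sliding_windows(x)
print(wins.shape)  # (14, 128, 9)
```

Each window starts 64 timesteps after the previous one, so consecutive windows share half their samples.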
Hardware Specification | Yes | "Furthermore, we built and trained our model based on Pytorch 1.9 and NVIDIA GeForce RTX 3080Ti GPU."
Software Dependencies | Yes | "Furthermore, we built and trained our model based on Pytorch 1.9 and NVIDIA GeForce RTX 3080Ti GPU."
Experiment Setup | Yes | "Besides, we set batch size as 50, optimizer as Adam, learning rate as 0.001, and training epoch as 10 for training our model."
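The quoted hyperparameters (batch size 50, Adam, learning rate 0.001, 10 epochs) can be wired into a minimal PyTorch training loop. The model, loss, and dummy data below are placeholders, since the paper excerpt only records the optimizer settings; the SEA architecture itself is not reconstructed here.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters quoted from the paper's experiment setup.
BATCH_SIZE, LR, EPOCHS = 50, 1e-3, 10

# Placeholder regressor; the real model in the paper is SEA, not this MLP.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128 * 9, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
loss_fn = nn.MSELoss()

# Dummy windows shaped like the segmentation above (128 timesteps,
# 9 channels, both illustrative), with scalar regression targets.
data = TensorDataset(torch.randn(200, 128, 9), torch.randn(200, 1))
loader = DataLoader(data, batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

With 200 dummy samples and batch size 50, each epoch runs four optimizer steps; only the three named hyperparameters come from the paper.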