DWLR: Domain Adaptation under Label Shift for Wearable Sensor

Authors: Juren Li, Yang Yang, Youmin Chen, Jianfeng Zhang, Zeyu Lai, Lujia Pan

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three distinct wearable sensor datasets demonstrate the effectiveness of DWLR, yielding a remarkable average performance improvement of 5.85%.
Researcher Affiliation | Collaboration | Juren Li1, Yang Yang1, Youmin Chen1, Jianfeng Zhang2, Zeyu Lai1 and Lujia Pan2; 1College of Computer Science and Technology, Zhejiang University; 2Huawei Noah's Ark Lab; {jrlee, yangya, youminchen, jerrylai}@zju.edu.cn, {zhangjianfeng3, panlujia}@huawei.com
Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Supplementary material is available at https://github.com/JuRenGithub/DWLR.
Open Datasets | Yes | In our experiments, we use three real-world wearable sensor datasets (i.e., WISDM [Kwapisz et al., 2011], UCIHAR [Anguita et al., 2013] and HHAR [Stisen et al., 2015]). We also conduct experiments on a human sensor dataset, Sleep-EDF [Goldberger et al., 2000].
Dataset Splits | No | The paper mentions training on source data and testing on target data, but it does not explicitly provide training/validation splits or refer to a specific validation set size or methodology.
Hardware Specification | No | The paper describes the experimental setup and parameters but does not specify the exact hardware components (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions the use of an AdamW optimizer but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For our model, α is set to 0.5 and β is set to 2.0. For WISDM, HHAR and UCIHAR, the feature extractor and classifier are pretrained for 10 epochs. For Sleep-EDF, the pretraining epoch count is set to 50, and we also adjust the baseline pretraining epochs where pretraining is required. The batch size is set to 256. We employ the AdamW optimizer with a learning rate of 1e-3 and a weight decay of 1e-3.
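The reported Experiment Setup values can be mapped onto a training configuration directly. The following is a minimal sketch, assuming PyTorch; the feature extractor, classifier, input shapes, number of classes, and synthetic source-domain data are hypothetical placeholders not taken from the paper, and only the optimizer choice, learning rate, weight decay, batch size, pretraining epochs, and the α/β values come from the quoted setup. The roles of α and β (weights on DWLR's additional loss terms) are not detailed in this report and are left unused here.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Reported hyperparameters (from the Experiment Setup row above).
ALPHA, BETA = 0.5, 2.0           # would weight DWLR's extra loss terms (not implemented here)
BATCH_SIZE = 256
PRETRAIN_EPOCHS = 10             # 10 for WISDM/HHAR/UCIHAR; 50 for Sleep-EDF
LR, WEIGHT_DECAY = 1e-3, 1e-3

# Hypothetical stand-ins for the paper's feature extractor and classifier.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(9 * 128, 128), nn.ReLU())
classifier = nn.Linear(128, 6)

# Hypothetical source-domain data: 1024 windows of 9-channel, 128-step sensor signals.
x = torch.randn(1024, 9, 128)
y = torch.randint(0, 6, (1024,))
loader = DataLoader(TensorDataset(x, y), batch_size=BATCH_SIZE, shuffle=True)

# AdamW with the reported learning rate and weight decay, over both modules.
optimizer = torch.optim.AdamW(
    list(feature_extractor.parameters()) + list(classifier.parameters()),
    lr=LR,
    weight_decay=WEIGHT_DECAY,
)

# Source-domain pretraining loop with a standard classification objective (assumed).
for epoch in range(PRETRAIN_EPOCHS):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(classifier(feature_extractor(xb)), yb)
        loss.backward()
        optimizer.step()
```

This sketch only covers the pretraining stage described in the quoted setup; the subsequent domain-adaptation stage of DWLR is not reproduced here.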