Diffusion Language-Shapelets for Semi-supervised Time-Series Classification

Authors: Zhen Liu, Wenbin Pei, Disen Lan, Qianli Ma

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments have been conducted on the UCR time series archive, and the results reveal that the proposed DiffShape method achieves state-of-the-art performance and exhibits superior interpretability over baselines." "Datasets. We used the UCR time series archive (Dau et al. 2019) to evaluate the proposed method. Similar to prior time series SSC work (Liu et al. 2023b), we selected 106 UCR time series datasets for our experiments." "Main Results. As shown in Table 1, it is found that DiffShape achieves the best classification performance under different labeling ratios on the 106 UCR time series datasets."
Researcher Affiliation | Academia | "Zhen Liu (1), Wenbin Pei (2,3), Disen Lan (1), Qianli Ma (1,*); (1) School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; (2) School of Computer Science and Technology, Dalian University of Technology, Dalian, China; (3) Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education; cszhenliu@mail.scut.edu.cn, peiwenbin@dlut.edu.cn, 202130480657@mail.scut.edu.cn, qianlima@scut.edu.cn"
Pseudocode | Yes | "In addition, the pseudo-code for DiffShape is presented in Algorithm 1 within the Appendix."
Open Source Code | Yes | "The implementation of DiffShape, along with the supplementary materials provided in the Appendix, can be accessed at https://github.com/qianlima-lab/DiffShape."
Open Datasets | Yes | "Datasets. We used the UCR time series archive (Dau et al. 2019) to evaluate the proposed method." (A minimal loading sketch follows the table.)
Dataset Splits | Yes | "Following the suggestion given by Dau et al. (2019); Liu et al. (2023b), we adopted a five-fold cross-validation method, where the training-validation-test set ratio is set to 60%-20%-20% for each dataset." (A split sketch follows the table.)
Hardware Specification | Yes | "We run experiments using PyTorch 1.10 on two NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies | Yes | "We run experiments using PyTorch 1.10 on two NVIDIA GeForce RTX 3090 GPUs."
Experiment Setup | Yes | "The maximum epoch, the learning rate and the batch size are set to 1000, 1e-3 and 128, respectively. We set µ_diff to 0.01, µ_lan to 0.001, sampling steps T to 10, and τ in Eq. (10) to 50. Like Liu et al. (2023b), we also use labeled data for warm-up training in the first 300 epochs." (A configuration sketch follows the table.)
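
For the Open Datasets row: UCR archive (2018 release) datasets ship as tab-separated files with the class label in the first column. A minimal loading sketch; the dataset name and directory layout are illustrative assumptions, not taken from the paper:

    import numpy as np

    def load_ucr_split(path):
        # Each row of a UCR .tsv file: class label first, then the series values.
        data = np.loadtxt(path, delimiter="\t")
        return data[:, 1:], data[:, 0].astype(int)

    # Hypothetical paths; each of the 106 selected datasets follows this layout.
    X_train, y_train = load_ucr_split("UCRArchive_2018/Coffee/Coffee_TRAIN.tsv")
    X_test, y_test = load_ucr_split("UCRArchive_2018/Coffee/Coffee_TEST.tsv")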
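
For the Dataset Splits row, one plausible reading of the protocol is five repeated stratified 60%-20%-20% splits per dataset; a minimal sketch using scikit-learn, where the helper below is an assumption, not the authors' released code:

    import numpy as np
    from sklearn.model_selection import train_test_split

    def split_60_20_20(X, y, seed):
        # 60% train, then split the remaining 40% evenly into validation and test.
        X_tr, X_rest, y_tr, y_rest = train_test_split(
            X, y, train_size=0.6, stratify=y, random_state=seed)
        X_val, X_te, y_val, y_te = train_test_split(
            X_rest, y_rest, train_size=0.5, stratify=y_rest, random_state=seed)
        return (X_tr, y_tr), (X_val, y_val), (X_te, y_te)

    # Pool one dataset's official splits (continuing the sketch above),
    # then draw five folds, one per shuffle seed.
    X = np.concatenate([X_train, X_test])
    y = np.concatenate([y_train, y_test])
    folds = [split_60_20_20(X, y, seed) for seed in range(5)]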
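
For the Experiment Setup row, the reported hyperparameters collected in one place; a minimal sketch, assuming the µ coefficients weight auxiliary loss terms against a supervised loss (only the values come from the paper, the combination and names are assumptions):

    # Values quoted from the paper; key names are illustrative.
    config = dict(
        max_epochs=1000,
        warmup_epochs=300,   # labeled-data-only warm-up, as in Liu et al. (2023b)
        learning_rate=1e-3,
        batch_size=128,
        mu_diff=0.01,        # weight on the diffusion objective
        mu_lan=0.001,        # weight on the language-shapelet objective
        sampling_steps=10,   # diffusion sampling steps T
        tau=50,              # temperature tau in Eq. (10)
    )

    def total_loss(l_cls, l_diff, l_lan, cfg=config):
        # Hypothetical weighted combination implied by the mu coefficients.
        return l_cls + cfg["mu_diff"] * l_diff + cfg["mu_lan"] * l_lan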