Temporal-Frequency Co-training for Time Series Semi-supervised Learning

Authors: Zhen Liu, Qianli Ma, Peitian Ma, Linghao Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on 106 UCR datasets show that TS-TFC outperforms state-of-the-art methods, demonstrating the effectiveness and robustness of our proposed model."
Researcher Affiliation | Academia | "School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education"
Pseudocode | Yes | "For details of TS-TFC training, please refer to Algorithm 1 in the Appendix."
Open Source Code | Yes | "Our implementation of TS-TFC is available at https://github.com/qianlima-lab/TS-TFC."
Open Datasets | Yes | "We conduct experiments utilizing the UCR time series archive (Dau et al. 2019), which is widely employed for time series classification studies (Ismail Fawaz et al. 2019)." (a loading sketch follows the table)
Dataset Splits | Yes | "As suggested by (Dau et al. 2019; Wang et al. 2019), we merge the original training and test sets, and then divide them into train-validation-test splits via five-fold cross-validation in a 60%-20%-20% ratio for evaluation." (see the split sketch after the table)
Hardware Specification | Yes | "All experiments are repeated five times with five random seeds, and are conducted on the PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies | Yes | "All experiments are repeated five times with five random seeds, and are conducted on the PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs."
Experiment Setup | Yes | "Adam is used as the optimizer, and the learning rate is 0.001. The maximum batch size is 1024, and the maximum number of epochs is 1000. The temperature coefficient τ in Eq. 2 and Eq. 3 is set to 50, and the hyperparameters α are set to 0.99 and 5. The top-k values in Eq. 4 for the temporal and frequency encoders are set to 40 and 30, respectively. The fixed threshold γ is set to 0.95. The hyperparameters λ and µ are both set to 0.05. Further, we use labeled data for warm-up training in the first 300 epochs, mitigating the model's learning bias on unlabeled data." (a configuration sketch follows the table)
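
The "Open Datasets" row above refers to the UCR time series archive. A minimal loading sketch, assuming the 2018 archive's .tsv layout (first column is the class label, the remaining columns are the series values); the dataset name and path are illustrative, not taken from the TS-TFC codebase:

```python
import numpy as np

def load_ucr_tsv(path):
    """Read one UCR-2018 split: label in column 0, series values after it."""
    data = np.loadtxt(path, delimiter="\t")
    return data[:, 1:], data[:, 0].astype(int)

# Illustrative dataset; most of the univariate archive shares this layout.
X_train, y_train = load_ucr_tsv("UCRArchive_2018/Coffee/Coffee_TRAIN.tsv")
X_test, y_test = load_ucr_tsv("UCRArchive_2018/Coffee/Coffee_TEST.tsv")
```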
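
The "Dataset Splits" row quotes a merge-then-resplit protocol. A sketch of that 60%-20%-20% five-fold scheme, assuming stratified splitting (the quote does not say whether the splits are stratified) and scikit-learn utilities:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def merge_and_split(X_train, y_train, X_test, y_test, seed=0):
    """Merge the original UCR train/test sets, then yield five
    60%-20%-20% train/validation/test folds."""
    X = np.concatenate([X_train, X_test], axis=0)
    y = np.concatenate([y_train, y_test], axis=0)
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_val_idx, test_idx in skf.split(X, y):
        # Each CV fold holds out 20% as the test set; splitting the
        # remaining 80% at 75/25 yields 60% train and 20% validation.
        X_tv, y_tv = X[train_val_idx], y[train_val_idx]
        X_tr, X_val, y_tr, y_val = train_test_split(
            X_tv, y_tv, test_size=0.25, stratify=y_tv, random_state=seed)
        yield (X_tr, y_tr), (X_val, y_val), (X[test_idx], y[test_idx])
```

The 75/25 inner split is the one ratio forced by the quoted numbers: 20% of the data is already held out by each CV fold, so 25% of the remaining 80% gives the stated 20% validation share.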
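
The "Experiment Setup" row pins down the optimizer, schedule, and confidence threshold. The sketch below wires those quoted values into a generic warm-up-then-pseudo-label loop; it is not the TS-TFC algorithm itself (which co-trains temporal and frequency views), and the placeholder encoder and step functions are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Placeholder single-view encoder; TS-TFC's temporal/frequency pair differs.
model = nn.Sequential(nn.Conv1d(1, 64, kernel_size=7, padding=3),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # quoted Adam, lr=0.001
criterion = nn.CrossEntropyLoss()

MAX_EPOCHS, WARMUP_EPOCHS, GAMMA = 1000, 300, 0.95  # quoted schedule and threshold

def supervised_step(x, y):
    """One warm-up step on labeled data only (first 300 epochs)."""
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

def pseudo_label_step(x_u):
    """One post-warm-up step: train only on predictions above gamma."""
    with torch.no_grad():
        conf, pseudo = torch.softmax(model(x_u), dim=1).max(dim=1)
        mask = conf >= GAMMA
    if mask.any():
        optimizer.zero_grad()
        criterion(model(x_u[mask]), pseudo[mask]).backward()
        optimizer.step()
```

A driver loop would call supervised_step while epoch < WARMUP_EPOCHS and pseudo_label_step afterwards, matching the warm-up schedule quoted above.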