Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Temporal-Frequency Co-training for Time Series Semi-supervised Learning
Authors: Zhen Liu, Qianli Ma, Peitian Ma, Linghao Wang
AAAI 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 106 UCR datasets show that TS-TFC outperforms state-of-the-art methods, demonstrating the effectiveness and robustness of our proposed model. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, South China University of Technology, Guangzhou, China 2Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education |
| Pseudocode | Yes | For details of TS-TFC training, please refer to Algorithm 1 in the Appendix. |
| Open Source Code | Yes | Our implementation of TS-TFC is available at https://github.com/qianlima-lab/TS-TFC. |
| Open Datasets | Yes | We conduct experiments utilizing the UCR time series archive (Dau et al. 2019), which is widely employed for time series classification studies (Ismail Fawaz et al. 2019). |
| Dataset Splits | Yes | As suggested by (Dau et al. 2019; Wang et al. 2019), we merge the original training and test sets, and then divide the train-validation-test set using a five-fold cross-validation method in the ratio of 60%-20%-20% for evaluation. |
| Hardware Specification | Yes | All experiments are repeated five times with five random seeds, and are conducted on the PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | Yes | All experiments are repeated five times with five random seeds, and are conducted on the PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs. |
| Experiment Setup | Yes | Adam is used as the optimizer, and the learning rate is 0.001. The maximum batch size is 1024, and the maximum epoch is 1000. The temperature coefficients τ in Eq. 2 and Eq. 3 are set to 50, and the hyperparameter α is set to 0.99 and 5. The top k in Eq. 4 for the temporal and frequency encoders is set to 40 and 30, respectively. The fixed threshold γ is set to 0.95. The hyperparameters λ and µ are set to 0.05. Further, we employ labeled data for warm-up training in the first 300 epochs, mitigating the model's learning bias on unlabeled data. |
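The dataset-split protocol quoted above (merge the original train and test sets, then divide into 60%-20%-20% train/validation/test via five-fold cross-validation) can be sketched as follows. This is a hypothetical NumPy reconstruction: the `five_fold_splits` function and its fold-rotation scheme are illustrative assumptions, not the paper's released code, and the authors' exact fold assignment may differ.

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Sketch of a 60%-20%-20% train/val/test protocol via five folds.

    Indices over the merged train+test pool are shuffled and divided into
    five folds; each fold serves once as the test set, the next fold as
    validation, and the remaining three folds (~60%) as training data.
    (Hypothetical reconstruction; not the paper's official implementation.)
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        val = folds[(k + 1) % 5]
        train = np.concatenate(
            [folds[j] for j in range(5) if j not in (k, (k + 1) % 5)]
        )
        yield train, val, test

# Each of the five rotations yields ~60% train, 20% val, 20% test.
splits = list(five_fold_splits(100))
```

Rotating the validation fold alongside the test fold keeps the three subsets disjoint in every split while covering the full merged pool, which matches the cross-validation evaluation described in the quote.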