TS2Vec: Towards Universal Representation of Time Series

Authors: Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, Bixiong Xu

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on time series classification tasks to evaluate the quality of time series representations. As a result, TS2Vec achieves significant improvement over existing SOTAs of unsupervised time series representation on 125 UCR datasets and 29 UEA datasets. The learned timestamp-level representations also achieve superior results in time series forecasting and anomaly detection tasks.
Researcher Affiliation | Collaboration | Zhihan Yue,1,2 Yujing Wang,1,2 Juanyong Duan,2 Tianmeng Yang,1,2 Congrui Huang,2 Yunhai Tong,1 Bixiong Xu2 (1 Peking University, 2 Microsoft). {zhihan.yue,youngtimmy,yhtong}@pku.edu.cn; {yujwang,juanyong.duan,conhua,bix}@microsoft.com
Pseudocode | Yes | Algorithm 1: Calculating the hierarchical contrastive loss
Open Source Code | Yes | The source code is publicly available at https://github.com/yuezhihan/ts2vec.
Open Datasets | Yes | We conduct extensive experiments on time series classification to evaluate the instance-level representations, compared with other SOTAs of unsupervised time series representation, including T-Loss, TS-TCC (Eldele et al. 2021), TST (Zerveas et al. 2021) and TNC (Tonekaboni, Eytan, and Goldenberg 2021). The UCR archive (Dau et al. 2019) and UEA archive (Bagnall et al. 2018) are adopted for evaluation.
Dataset Splits | No | The paper mentions training, testing, and evaluation on datasets but does not explicitly state specific train/validation/test splits (e.g., percentages or counts) or a cross-validation setup in the main text; it defers to "standard evaluation protocols" described in the appendix.
Hardware Specification | Yes | Table 1 also shows the total training time of representation learning methods with an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper does not explicitly state software dependencies with specific version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The paper states, "Detailed experimental settings are presented in the appendix," but does not provide specific hyperparameters or training details in the main text.
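The Pseudocode row cites Algorithm 1, the hierarchical contrastive loss. As a rough illustration of the idea the paper describes (an instance-wise and a temporal contrastive term, combined across time scales via max-pooling), here is a pure-NumPy sketch; the function names, the equal weighting `alpha=0.5`, and the NT-Xent-style formulation are assumptions for illustration, not the authors' PyTorch implementation:

```python
import numpy as np

def _nt_xent(za, zb):
    """NT-Xent-style loss for two stacked views of shape (N, K, C):
    within each of the N groups, position k of view A and position k of
    view B are positives; the other 2K - 2 positions are negatives.
    (Assumed formulation, for illustration only.)"""
    N, K, C = za.shape
    z = np.concatenate([za, zb], axis=1)                 # (N, 2K, C)
    sim = z @ z.transpose(0, 2, 1)                       # (N, 2K, 2K) dot-product similarities
    logits = sim - sim.max(axis=-1, keepdims=True)       # numerical stability
    mask = ~np.eye(2 * K, dtype=bool)                    # exclude self-similarity
    log_prob = logits - np.log((np.exp(logits) * mask).sum(axis=-1, keepdims=True))
    idx = np.arange(K)
    return -(log_prob[:, idx, idx + K].mean() + log_prob[:, idx + K, idx].mean()) / 2

def temporal_loss(z1, z2):
    # Contrast timestamps within each instance; z1, z2: (B, T, C).
    return _nt_xent(z1, z2)

def instance_loss(z1, z2):
    # Contrast instances at each timestamp: swap batch and time axes first.
    return _nt_xent(z1.transpose(1, 0, 2), z2.transpose(1, 0, 2))

def _maxpool2(z):
    # Max-pool (B, T, C) along the time axis with kernel/stride 2.
    t = (z.shape[1] // 2) * 2                            # drop an odd trailing step
    return z[:, :t].reshape(z.shape[0], -1, 2, z.shape[2]).max(axis=2)

def hierarchical_contrastive_loss(z1, z2, alpha=0.5):
    """Average instance- and timestamp-level terms over time scales,
    halving the temporal resolution by max-pooling at each level."""
    loss, depth = 0.0, 0
    while z1.shape[1] > 1:
        loss += alpha * instance_loss(z1, z2) + (1 - alpha) * temporal_loss(z1, z2)
        depth += 1
        z1, z2 = _maxpool2(z1), _maxpool2(z2)
    loss += alpha * instance_loss(z1, z2)                # top level: one timestamp left
    depth += 1
    return loss / depth
```

Given representations `z1`, `z2` of shape (batch, time, channels) from two augmented views of the same batch, `hierarchical_contrastive_loss(z1, z2)` returns a scalar; an actual training loop would compute this on encoder outputs and backpropagate, which this NumPy sketch does not support.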