TimesURL: Self-Supervised Contrastive Learning for Universal Time Series Representation Learning

Authors: Jiexi Liu, Songcan Chen

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, to evaluate the generality and the downstream task performance of the representation learned by our TimesURL, we extensively experiment on 6 downstream tasks, including short- and long-term forecasting, imputation, classification, anomaly detection and transfer learning.
Researcher Affiliation | Academia | Jiexi Liu (1,2), Songcan Chen (1,2)*. 1: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; 2: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence. Email: {liujiexi, s.chen}@nuaa.edu.cn
Pseudocode | No | The paper describes the proposed methods using textual descriptions, mathematical formulations (e.g., Eq. 1-7), and diagrams (Figure 1), but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper thanks 'Meng Cao for assisting with the implementation of the code' in the acknowledgments, but it does not provide an explicit statement about releasing the source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | We select the commonly used UEA (Bagnall et al. 2018) and UCR (Dau et al. 2019) Classification Archives. [...] We compare models on two benchmark datasets, including KPI (Ren et al. 2019), a competition dataset that includes multiple minutely sampled real KPI curves, and Yahoo (Nikolay Laptev 2015), including 367 hourly sampled time series.
Dataset Splits | Yes | To compare the model capacity under different proportions of missing data, we randomly mask the time points in the ratio of {12.5%, 25%, 37.5%, 50%}.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory specifications used for running experiments.
Software Dependencies | No | The paper mentions using a 'Temporal Convolution Network (TCN) as the backbone encoder' but does not specify any software names with version numbers for libraries, frameworks, or other dependencies.
Experiment Setup | Yes | For TimesURL, we use a Temporal Convolution Network (TCN) as the backbone encoder, similar to TS2Vec (Yue et al. 2022). [...] we then follow the same protocol as TS2Vec, which uses an SVM classifier with RBF kernel trained on top of the representations for classification. [...] For short-term forecasting, the horizon is 24 and 48, while for long-term forecasting the horizon ranges from 96 to 720. [...] We follow the setting of a streaming evaluation protocol (Ren et al. 2019) in time series anomaly detection that determines whether the last point xt in a time series slice x1, ..., xt is an anomaly or not.
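The classification protocol quoted in the Experiment Setup row (the TS2Vec-style evaluation: freeze the learned representations, then fit an SVM with an RBF kernel on top of them) can be sketched as follows. This is a minimal illustration, not the authors' code: the `encode` function below is a random-projection placeholder standing in for the frozen TimesURL/TS2Vec encoder, the 320-dimensional representation size and the grid over the SVM penalty `C` are assumptions borrowed from the common TS2Vec setup, and the data is synthetic rather than a UCR/UEA split.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Placeholder for the frozen encoder: maps a batch of time series of shape
# (n, T, channels) to instance-level representations of shape (n, dim).
# A real run would call the trained TimesURL/TS2Vec model here instead.
def encode(x: np.ndarray, dim: int = 320) -> np.ndarray:
    proj = rng.standard_normal((x.shape[1] * x.shape[2], dim))
    return x.reshape(len(x), -1) @ proj / np.sqrt(proj.shape[0])

# Toy binary-labeled dataset standing in for a UCR/UEA archive split.
n_train, n_test, T, C = 64, 32, 50, 3
x_train = rng.standard_normal((n_train, T, C))
y_train = rng.integers(0, 2, n_train)
x_test = rng.standard_normal((n_test, T, C))
y_test = rng.integers(0, 2, n_test)

# Evaluation protocol: train an RBF-kernel SVM on the frozen representations,
# selecting the penalty C by cross-validated grid search, then score on test.
clf = GridSearchCV(SVC(kernel="rbf"), {"C": np.logspace(-4, 4, 9)}, cv=3)
clf.fit(encode(x_train), y_train)
accuracy = clf.score(encode(x_test), y_test)
print(f"test accuracy: {accuracy:.3f}")
```

Because the classifier only ever sees the fixed representations, this protocol measures the quality of the learned embedding itself rather than any end-to-end fine-tuning, which is why the same frozen encoder can be reused across the other downstream tasks the row lists.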