T-Rep: Representation Learning for Time Series using Time-Embeddings

Authors: Archibald Felix Fraikin, Adrien Bennetot, Stéphanie Allassonnière

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate T-Rep on downstream classification, forecasting, and anomaly detection tasks. It is compared to existing self-supervised algorithms for time series, which it outperforms in all three tasks.
Researcher Affiliation | Collaboration | Archibald Fraikin, Let it Care, PariSanté Campus, Paris, France, archibald.fraikin@inria.fr; Adrien Bennetot, Let it Care, PariSanté Campus, Paris, France, adrien.bennetot@letitcare.com; Stéphanie Allassonnière, Université Paris Cité, INRIA, Inserm, SU, Centre de Recherche des Cordeliers, Paris, stephanie.allassonniere@inria.fr
Pseudocode | No | The paper describes the architecture and workflow of T-Rep through text and diagrams, but it does not include any formal pseudocode or algorithm blocks for its proposed method. Figure 5, labeled 'Hierarchical loss algorithm', is explicitly stated to be taken from Yue et al. (2022).
Open Source Code | Yes | The code written to produce these experiments has been made publicly available at https://github.com/Let-it-Care/T-Rep.
Open Datasets | Yes | We perform two experiments, point-based anomaly detection on the Yahoo dataset (Nikolay Laptev, 2015), and segment-based anomaly detection on the 2019 PhysioNet Challenge's Sepsis dataset (Reyna et al., 2020a; Goldberger et al., 2000). ... These models are evaluated on the UEA classification archive's 30 multivariate time series (Dau et al., 2019). ... We perform a multivariate forecasting task on the four public ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2) (Zhou et al., 2021). ... In the first qualitative experiment, we visualize representations of incomplete time series using the Dodger Loop Game dataset from the UCR archive (Dau et al., 2019). ... Secondly, we decided to perform a more quantitative experiment, examining classification accuracy for different amounts of missing data, on the Articulary Word Recognition dataset of the UCR archive (Dau et al., 2019).
Dataset Splits | Yes | The train/validation/test split for the ETT datasets is 12/4/4 months. ... The penalty hyperparameter C is chosen through cross-validation, in the range {10^k | k ∈ ⟦-4, 4⟧}. (A sketch of the 12/4/4 split appears after the table.)
Hardware Specification | Yes | We train all models on a single NVIDIA GeForce RTX 3060 GPU with CUDA 11.7.
Software Dependencies | Yes | The implementation of the models is done in Python, using PyTorch 1.13 (Paszke et al., 2019) for deep learning and scikit-learn (Pedregosa et al., 2011) for SVMs, linear regressions, pre-processing etc. ... with CUDA 11.7.
Experiment Setup | Yes | We use a batch size of 16, a learning rate of 0.001, and set the maximum number of epochs to 200 across all datasets and tasks. We use 10 residual blocks for the encoder network, and set hidden channel widths to 128. The kernel size of all convolution layers is set to 3. ... The penalty hyperparameter C is chosen through cross-validation, in the range {10^k | k ∈ ⟦-4, 4⟧}. (Sketches of the encoder configuration and the C grid search appear after the table.)
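For the Dataset Splits row, a minimal sketch of how a chronological 12/4/4-month split can be realised for an ETT-style series, assuming 30-day months and hourly sampling for ETTh* (the function name, feature count, and synthetic data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical illustration of the 12/4/4-month chronological split reported
# for the ETT datasets. Assumes 30-day months; ETTh* is hourly data, ETTm*
# is 15-minute data (points_per_day=96).
def split_ett(values: np.ndarray, points_per_day: int = 24):
    """Split an array of shape (T, n_features) into train/val/test spans
    covering 12, 4, and 4 months respectively."""
    month = 30 * points_per_day
    train = values[: 12 * month]
    val = values[12 * month : 16 * month]
    test = values[16 * month : 20 * month]
    return train, val, test

# Synthetic stand-in for ETTh1 (hourly, 7 variables over 20 months):
data = np.random.randn(20 * 30 * 24, 7)
train, val, test = split_ett(data, points_per_day=24)
print(train.shape, val.shape, test.shape)  # (8640, 7) (2880, 7) (2880, 7)
```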
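For the Experiment Setup row, a hedged sketch of a dilated-convolution residual encoder with the quoted sizes (10 residual blocks, 128 hidden channels, kernel size 3) and the quoted learning rate. The class names, the dilation schedule, the choice of Adam, and the omission of T-Rep's time-embedding module are all assumptions made for illustration; this is not the authors' implementation, which is available at the GitHub link above.

```python
import torch
import torch.nn as nn

# Illustrative TS2Vec-style encoder with the quoted hyperparameters:
# 10 residual blocks, 128 hidden channels, kernel size 3.
class ResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        padding = (kernel_size - 1) * dilation // 2  # keeps sequence length
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               padding=padding, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               padding=padding, dilation=dilation)

    def forward(self, x):
        y = torch.relu(self.conv1(x))
        y = self.conv2(y)
        return torch.relu(x + y)  # residual connection

class DilatedConvEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 128,
                 n_blocks: int = 10, kernel_size: int = 3):
        super().__init__()
        self.input_proj = nn.Conv1d(in_dim, hidden, kernel_size=1)
        self.blocks = nn.Sequential(*[
            ResidualBlock(hidden, kernel_size, dilation=2 ** i)  # assumed schedule
            for i in range(n_blocks)
        ])

    def forward(self, x):           # x: (batch, time, features)
        x = x.transpose(1, 2)        # -> (batch, channels, time)
        return self.blocks(self.input_proj(x)).transpose(1, 2)

encoder = DilatedConvEncoder(in_dim=7)
# Learning rate 0.001 as quoted; the optimizer choice is an assumption.
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
out = encoder(torch.randn(16, 100, 7))   # batch size 16 as quoted
print(out.shape)                         # torch.Size([16, 100, 128])
```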
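Also for the Dataset Splits and Experiment Setup rows, the penalty C of the downstream SVM is cross-validated over {10^k | k ∈ ⟦-4, 4⟧}. Below is a minimal scikit-learn sketch of that grid search on synthetic stand-ins for T-Rep representations; the RBF kernel, 5-fold cross-validation, and array shapes are assumptions, not details quoted from the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-ins for instance-level representations and class labels.
rng = np.random.default_rng(0)
train_repr = rng.normal(size=(200, 128))    # 200 series, 128-dim embeddings
train_labels = rng.integers(0, 4, size=200)

# Penalty C searched over {10^k | k = -4, ..., 4}, as quoted above.
param_grid = {"C": [10.0 ** k for k in range(-4, 5)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(train_repr, train_labels)
print("best C:", search.best_params_["C"])
```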