TSLANet: Rethinking Transformers for Time Series Representation Learning

Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Xiaoli Li

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our comprehensive experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection, showcasing its resilience and adaptability across a spectrum of noise levels and data sizes."
Researcher Affiliation | Academia | "(1) Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore; (2) I2R, Agency for Science, Technology and Research, Singapore. Correspondence to: Min Wu <wumin@i2r.a-star.edu.sg>."
Pseudocode | Yes | "The full operation of the ASB is described in Algorithm 1 in the Appendix." A hedged sketch of such an adaptive spectral block appears after the table.
Open Source Code | Yes | "The code is available at https://github.com/emadeldeen24/TSLANet."
Open Datasets | Yes | "We examine the classification ability of TSLANet on a total of 116 datasets, including 85 univariate UCR datasets (Dau et al., 2019) and 26 multivariate UEA datasets (Bagnall et al., 2018). We also include another 5 datasets: two biomedical datasets, namely the Sleep-EDF dataset (Goldberger et al., 2000) for EEG-based sleep stage classification and the MIT-BIH dataset (Moody & Mark, 2001) for ECG-based arrhythmia classification, and three human activity recognition (HAR) datasets, namely UCIHAR (Anguita et al., 2013), WISDM (Kwapisz et al., 2011), and HHAR (Stisen et al., 2015)."
Dataset Splits | Yes | "For the classification task, the UCR and UEA datasets are already split into train/test splits. A validation set was picked from each dataset's training set with a ratio of 80/20. ... For biomedical and human activity recognition datasets, which are not split by default, we split the data into a 60/20/20 ratio for train/validation/test splits. For forecasting and anomaly detection datasets, these are split into a ratio of 70/10/20 following a line of previous works, towards a fair comparison with these works (Zhou et al., 2022; Kitaev et al., 2020; Li et al., 2021; Wu et al., 2023)." A sketch of these splitting protocols appears after the table.
Hardware Specification | Yes | "TSLANet was implemented using PyTorch, and the experiments were conducted on NVIDIA RTX A6000 GPUs."
Software Dependencies | No | The paper states, 'TSLANet was implemented using PyTorch', but it does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | "To train the classification experiments, we optimized TSLANet using AdamW with a learning rate of 1e-3 and a weight decay of 1e-4, applied during both training and pretraining phases. The experiments ran for 50 epochs for pretraining and 100 epochs for fine-tuning. For the forecasting and anomaly detection experiments, we utilized a learning rate of 1e-4 and a weight decay of 1e-6, with the pretraining and fine-tuning phases running for 10 and 20 epochs, respectively." A sketch of this optimizer configuration appears after the table.
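
On the pseudocode finding: the paper describes an Adaptive Spectral Block (ASB) that filters series in the frequency domain, with the full procedure in its Algorithm 1. The snippet below is only a minimal sketch of that general pattern (rfft, learnable complex filter, energy-based gate, irfft), not the authors' algorithm; the shapes, initialization, and the soft sigmoid gate standing in for the paper's adaptive high-frequency masking are all assumptions.

```python
import torch
import torch.nn as nn


class AdaptiveSpectralSketch(nn.Module):
    """Hypothetical sketch of an FFT-based filtering block.

    Pattern: rfft -> learnable complex filter -> energy-based gate -> irfft.
    This is NOT the paper's Algorithm 1; the gate is a soft sigmoid stand-in
    for its adaptive high-frequency masking.
    """

    def __init__(self, seq_len: int, dim: int):
        super().__init__()
        n_freq = seq_len // 2 + 1  # rfft keeps only non-negative frequencies
        # Learnable complex filter stored as (real, imag) pairs.
        self.filt = nn.Parameter(0.02 * torch.randn(n_freq, dim, 2))
        self.thresh = nn.Parameter(torch.tensor(0.0))  # assumed gate threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim), real-valued
        xf = torch.fft.rfft(x, dim=1)               # complex spectrum
        xf = xf * torch.view_as_complex(self.filt)  # global learnable filter
        # Normalized per-bin energy decides how strongly each bin passes.
        energy = xf.abs().pow(2).mean(dim=-1, keepdim=True)
        energy = energy / (energy.mean(dim=1, keepdim=True) + 1e-8)
        gate = torch.sigmoid(energy - self.thresh)  # soft, differentiable gate
        return torch.fft.irfft(xf * gate, n=x.size(1), dim=1)


# Usage: a batch of 8 series, length 128, 3 channels.
block = AdaptiveSpectralSketch(seq_len=128, dim=3)
out = block(torch.randn(8, 128, 3))  # same shape as the input
```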
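
On the dataset-splits finding: the reported protocols (an 80/20 validation carve-out from a predefined training split, and 60/20/20 for datasets that ship unsplit) map directly onto standard PyTorch utilities. A minimal sketch, using synthetic TensorDatasets as stand-ins for the real loaders:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical stand-in for a UCR/UEA training split (the real loaders differ).
train_set = TensorDataset(torch.randn(500, 128, 1), torch.randint(0, 5, (500,)))

# 80/20 train/validation carve-out from the predefined training split.
n_val = int(0.2 * len(train_set))
train_sub, val_sub = random_split(train_set, [len(train_set) - n_val, n_val])

# 60/20/20 train/val/test split for the biomedical and HAR datasets,
# which the paper says are not split by default.
full_set = TensorDataset(torch.randn(1000, 3000, 1), torch.randint(0, 5, (1000,)))
n_train = int(0.6 * len(full_set))
n_val = int(0.2 * len(full_set))
n_test = len(full_set) - n_train - n_val
train_s, val_s, test_s = random_split(full_set, [n_train, n_val, n_test])
```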
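
On the experiment setup: the reported hyperparameters translate directly to a torch.optim.AdamW configuration. A sketch, with a placeholder module standing in for the TSLANet model itself; the training loop is omitted.

```python
import torch

model = torch.nn.Linear(128, 8)  # placeholder; not the actual TSLANet model

# Classification setting from the paper: AdamW, lr 1e-3, weight decay 1e-4,
# 50 pretraining epochs and 100 fine-tuning epochs.
optimizer_cls = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
pretrain_epochs_cls, finetune_epochs_cls = 50, 100

# Forecasting / anomaly detection setting: lr 1e-4, weight decay 1e-6,
# 10 pretraining epochs and 20 fine-tuning epochs.
optimizer_fc = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-6)
pretrain_epochs_fc, finetune_epochs_fc = 10, 20
```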