Time-Series Representation Learning via Temporal and Contextual Contrasting

Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, Cuntai Guan

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments have been carried out on three real-world time-series datasets. The results manifest that training a linear classifier on top of the features learned by our proposed TS-TCC performs comparably with the supervised training. Additionally, our proposed TS-TCC shows high efficiency in few-labeled data and transfer learning scenarios.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (2) Institute for Infocomm Research, A*STAR, Singapore
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is publicly available at https://github.com/emadeldeen24/TS-TCC.
Open Datasets | Yes | To evaluate our model, we adopted three publicly available datasets for human activity recognition, sleep stage classification and epileptic seizure prediction, respectively. Additionally, we investigated the transferability of our learned features on a fault diagnosis dataset. Human Activity Recognition (HAR): We use UCI HAR dataset [Anguita et al., 2013]... Sleep Stage Classification: We downloaded Sleep-EDF dataset from PhysioBank [Goldberger et al., 2000]... Epilepsy Seizure Prediction: The Epileptic Seizure Recognition dataset [Andrzejak et al., 2001]... Fault Diagnosis (FD): We conducted the transferability experiment on a real-world fault diagnosis dataset [Lessmeier et al., 2016].
Dataset Splits | Yes | We split the data into 60%, 20%, 20% for training, validation and testing, considering a subject-wise split for the Sleep-EDF dataset to avoid overfitting. (See the subject-wise split sketch after this table.)
Hardware Specification | Yes | Lastly, we built our model using PyTorch 1.7 and trained it on a NVIDIA GeForce RTX 2080 Ti GPU.
Software Dependencies | Yes | Lastly, we built our model using PyTorch 1.7 and trained it on a NVIDIA GeForce RTX 2080 Ti GPU.
Experiment Setup | Yes | We applied a batch size of 128 (which was reduced to 32 in few-labeled data experiments as data size may be less than 128). We used the Adam optimizer with a learning rate of 3e-4, weight decay of 3e-4, β1 = 0.9, and β2 = 0.99. For the strong augmentation, we set M_HAR = 10, M_Ep = 12 and M_EDF = 20, while for the weak augmentation, we set the scaling ratio to 2 for all the datasets. We set λ1 = 1, while we achieved good performance when λ2 ≈ 1. Particularly, we set it as 0.7 in our experiments on the four datasets. In the Transformer, we set L = 4 and the number of heads as 4. We tuned h ∈ {32, 50, 64, 100, 128, 200, 256} and set h_{HAR,Ep} = 100, h_EDF = 64. We also set its dropout to 0.1. In contextual contrasting, we set τ = 0.2. (See the configuration sketch after this table.)
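
The quoted split is subject-wise for Sleep-EDF, meaning all recordings from a given subject must land in the same partition. Below is a minimal sketch of such a 60/20/20 grouped split using scikit-learn's GroupShuffleSplit; the function name and the per-sample `subjects` array are illustrative assumptions, and the TS-TCC repository ships its own preprocessing scripts rather than this code.

```python
from sklearn.model_selection import GroupShuffleSplit


def subject_wise_split(X, y, subjects, seed=0):
    """60/20/20 train/val/test split that keeps each subject in a single partition.

    X, y, subjects are NumPy arrays of equal length. GroupShuffleSplit allocates
    whole groups (subjects), so sample proportions are only approximately 60/20/20.
    """
    # 60% of the subjects go to training.
    outer = GroupShuffleSplit(n_splits=1, train_size=0.6, random_state=seed)
    train_idx, rest_idx = next(outer.split(X, y, groups=subjects))
    # Split the remaining subjects evenly into validation and test (20% / 20%).
    inner = GroupShuffleSplit(n_splits=1, train_size=0.5, random_state=seed)
    val_rel, test_rel = next(inner.split(X[rest_idx], y[rest_idx], groups=subjects[rest_idx]))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]
```

The quote singles out only Sleep-EDF for this treatment; for HAR and the Epilepsy dataset a plain (optionally stratified) 60/20/20 split serves the same purpose.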
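
The hyperparameters in the Experiment Setup row map directly onto a standard PyTorch training configuration. The sketch below simply wires those quoted values into a dictionary and an Adam optimizer; the dictionary keys and the stand-in encoder are illustrative names rather than the identifiers used in the TS-TCC repository, and per the paper λ1 and λ2 weight the temporal and contextual contrasting losses.

```python
import torch

# Hyperparameters quoted from the paper (key names are illustrative).
config = {
    "batch_size": 128,        # reduced to 32 in the few-labeled data experiments
    "lr": 3e-4,
    "weight_decay": 3e-4,
    "betas": (0.9, 0.99),
    "lambda1": 1.0,           # weight of the temporal contrasting loss
    "lambda2": 0.7,           # weight of the contextual contrasting loss
    "temperature": 0.2,       # tau in contextual contrasting
    "transformer_layers": 4,  # L
    "num_heads": 4,
    "dropout": 0.1,
    "hidden_dim": {"HAR": 100, "Epilepsy": 100, "Sleep-EDF": 64},  # tuned h per dataset
    "M": {"HAR": 10, "Epilepsy": 12, "Sleep-EDF": 20},             # strong-augmentation parameter
    "weak_scaling_ratio": 2,
}

model = torch.nn.Linear(128, 128)  # stand-in for the TS-TCC encoder and contrasting modules
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=config["lr"],
    betas=config["betas"],
    weight_decay=config["weight_decay"],
)

# Overall self-supervised objective, as weighted in the paper:
#   loss = lambda1 * temporal_contrasting_loss + lambda2 * contextual_contrasting_loss
```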