Utilizing Expert Features for Contrastive Learning of Time-Series Representations

Authors: Manuel T. Nonnenmacher, Lukas Oldenburg, Ingo Steinwart, David Reeb

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate on three real-world time-series datasets that ExpCLR surpasses several state-of-the-art methods for both unsupervised and semi-supervised representation learning.
Researcher Affiliation | Collaboration | 1: Bosch Center for Artificial Intelligence (BCAI), Robert Bosch GmbH, Renningen, Germany; 2: Institute for Stochastics and Applications, University of Stuttgart, Stuttgart, Germany.
Pseudocode | No | The paper contains mathematical formulations and descriptions of algorithms but does not include any explicitly labeled pseudocode blocks or algorithm listings.
Open Source Code | Yes | PyTorch code implementing our method is provided at https://github.com/boschresearch/expclr.
Open Datasets | Yes | Human Activity Recognition (HAR): The HAR dataset (Cruciani et al., 2019)... We downloaded the dataset from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones)... Sleep Stage Classification (SleepEDF): The dataset originates from (Goldberger et al., 2000; Kemp et al., 2000)... We downloaded the dataset from the PhysioNet database (https://physionet.org/content/sleep-edf/1.0.0)... MIT-BIH Atrial Fibrillation (Waveform): This dataset (Goldberger et al., 2000)... We downloaded the data from the PhysioNet database (https://physionet.org/content/afdb/1.0.0)... (See the download sketch after the table.)
Dataset Splits | Yes | While for hyperparameter optimization we split the training set X into 80% training and 20% validation data, for our comparison experiments we make use of the full training set and evaluate the representations on the test set. (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'PyTorch code implementing our method' but does not specify the version of PyTorch or any other software dependencies with version numbers.
Experiment Setup | Yes | To capture relevant temporal properties and to improve training stability (Bai et al., 2018), we choose as base encoder temporal convolutional network (TCN; Lea et al., 2017) layers in a ResNet (He et al., 2016) architecture with eight such temporal blocks. For the optimization step we used the Adam optimizer with parameters β1 = 0.9, β2 = 0.999 and exponential decay γ = 0.99. To enable a fair comparison between ExpCLR and the competing methods, we optimize the learning rate for each method and dataset individually via a grid search, and identify τ = 1, the remaining hyperparameter of Eq. 4 set to 1, embedding dimension e = 100, and a batch size of 64 as a good compromise over all datasets and algorithms. (See the encoder sketch after the table.)
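
Since all three datasets come from public archives, acquisition can be scripted. Below is a minimal Python sketch for fetching the HAR data; the direct zip URL is an assumption inferred from the UCI repository layout, not a link given in the paper.

```python
# Minimal sketch for fetching the HAR dataset. The zip URL is an assumption
# inferred from the UCI repository layout; the paper only links the dataset page.
import io
import urllib.request
import zipfile

HAR_ZIP = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "00240/UCI%20HAR%20Dataset.zip")  # assumed direct download location

with urllib.request.urlopen(HAR_ZIP) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
archive.extractall("data/har")  # unpack the archive's train/ and test/ folders
```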
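The 80/20 train/validation split used for hyperparameter optimization is straightforward to reproduce. A minimal NumPy sketch follows; the placeholder array, its shape, and the seed are illustrative assumptions.

```python
# Minimal sketch of the 80/20 train/validation split used for hyperparameter
# optimization. The placeholder array, its shape, and the seed are assumptions.
import numpy as np

X_train = np.random.randn(1000, 9, 128)  # stand-in for the training set X
rng = np.random.default_rng(0)           # fixed seed for a reproducible split
idx = rng.permutation(len(X_train))      # shuffle sample indices
cut = int(0.8 * len(X_train))            # 80% training, 20% validation
X_tr, X_val = X_train[idx[:cut]], X_train[idx[cut:]]
```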
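The setup row pins down most of the training configuration. The PyTorch sketch below shows one plausible realization of the described encoder and optimizer; only the eight residual temporal blocks, the Adam betas, the decay γ = 0.99, and the embedding dimension e = 100 come from the text, while the kernel size, dilation schedule, hidden width, input channels, and learning rate are illustrative assumptions. The authors' actual implementation is in the linked repository.

```python
# A minimal sketch of the described encoder and optimizer, assuming a standard
# dilated-TCN residual block; layer sizes and dilations are not from the paper.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Dilated causal conv block with a residual (ResNet-style) connection."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) * dilation  # causal padding (assumed scheme)
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))[..., :x.size(-1)]  # trim to stay causal
        out = self.conv2(out)[..., :x.size(-1)]
        return self.relu(out + x)  # residual connection

class Encoder(nn.Module):
    def __init__(self, in_channels: int, hidden: int = 64, emb_dim: int = 100):
        super().__init__()
        self.inp = nn.Conv1d(in_channels, hidden, kernel_size=1)
        # Eight temporal blocks, as stated in the paper; dilations are assumed.
        self.blocks = nn.Sequential(
            *[TemporalBlock(hidden, dilation=2 ** i) for i in range(8)])
        self.head = nn.Linear(hidden, emb_dim)  # embedding dimension e = 100

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.blocks(self.inp(x))
        return self.head(h.mean(dim=-1))  # global average pool over time

encoder = Encoder(in_channels=9)          # e.g. 9 sensor channels (assumed)
optimizer = torch.optim.Adam(encoder.parameters(),
                             lr=1e-3,     # placeholder; paper uses grid search
                             betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
```

In a training loop, `scheduler.step()` would be called once per epoch so that the learning rate decays by the stated factor γ = 0.99 each epoch.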