An Unsupervised Approach for Periodic Source Detection in Time Series

Authors: Berken Utku Demirel, Christian Holz

ICML 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Our experiments in three time series tasks against state-of-the-art learning methods show that the proposed approach consistently outperforms prior works, achieving performance improvements of more than 45–50%, showing its effectiveness.
Researcher Affiliation Academia Department of Computer Science, ETH Zurich, Switzerland.
Pseudocode No The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes Code: https://github.com/eth-siplab/Unsupervised_Periodicity_Detection
Open Datasets Yes We conducted experiments on eight datasets across three tasks, including heart rate (HR) estimation from electrocardiogram (ECG) and photoplethysmography (PPG) signals, step counting using inertial measurements (IMUs), and respiratory rate (breathing) estimation from PPG signals. ... PTB-XL (Wagner et al., 2020), WESAD (Schmidt et al., 2018), and DaLiA (Reiss et al., 2019) ... IEEE SPC12 and SPC22 (Zhang et al., 2015) ... CapnoBase (Karlen et al., 2013) ... BIDMC (Pimentel et al., 2017) ... Clemson dataset (Mattfeld et al., 2017)
Dataset Splits Yes PTB-XL: The dataset itself provides recommended splits into training and test sets. We, therefore, follow the exact recommendation. Ten percent of the training set is used for fine-tuning the self-supervised learning techniques. ... DaLiA ECG: We follow leave-one-out cross-validation for each subject. ... WESAD: We evaluate the dataset using leave-one-subject-out. ... The model selection is performed on the validation sets with the lowest loss, where the validation set is created by randomly splitting 10% of the remaining data after excluding the test subjects. (See the split sketch after this table.)
Hardware Specification Yes we calculated the inference time on a computer equipped with an Intel Core i7-10700K CPU running at 3.80 GHz and 32 GB of RAM. Second, we deployed our model on a MAX78000 AI Accelerator, which has previously implemented U-Net architectures (Moss et al., 2023). (See the timing sketch after this table.)
Software Dependencies No The paper mentions software like PyTorch, MATLAB, and optimizers like Adam, but it does not provide specific version numbers for these software components, which are required for a reproducible description of ancillary software.
Experiment Setup Yes Specifically, we use a combination of a convolutional and an LSTM-based network, which shows superior performance in many time series tasks (Qian et al., 2022; Biswas et al., 2019), as the backbone for the encoder fθ(.), where the projector is two fully-connected layers. During pre-training, we use InfoNCE (for contrastive learning-based methods) as the loss function, which is optimized using Adam (Kingma & Ba, 2015) with a learning rate of 0.003. We train the models with a batch size of 256 for 120 epochs and decay the learning rate using cosine decay. (See the training-setup sketch after this table.)
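
For context, the following is a minimal sketch of the leave-one-subject-out protocol with a 10% validation split quoted in the Dataset Splits row. The array names (signals, labels, subject_ids) and the use of scikit-learn are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

# Hypothetical arrays: windowed signals, per-window labels, and the subject each window came from.
signals = np.random.randn(1000, 1, 200)        # (n_windows, channels, samples)
labels = np.random.rand(1000)                  # e.g., heart rate per window
subject_ids = np.random.randint(0, 15, 1000)   # 15 hypothetical subjects

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(signals, labels, groups=subject_ids):
    # The held-out subject forms the test set.
    X_test, y_test = signals[test_idx], labels[test_idx]

    # 10% of the remaining data (test subject excluded) becomes the validation set,
    # used to select the model with the lowest validation loss.
    X_train, X_val, y_train, y_val = train_test_split(
        signals[train_idx], labels[train_idx], test_size=0.10, random_state=0
    )
    # ... train on (X_train, y_train), select on (X_val, y_val), report on (X_test, y_test)
```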
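Similarly, a hedged sketch of how per-window CPU inference latency could be timed in PyTorch, as referenced in the Hardware Specification row; the model below is a stand-in, not the paper's architecture.

```python
import time
import torch
import torch.nn as nn

# Stand-in model; the paper's actual network is a convolutional + LSTM encoder.
model = nn.Sequential(nn.Conv1d(1, 16, kernel_size=7, padding=3),
                      nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1),
                      nn.Flatten(),
                      nn.Linear(16, 1))
model.eval()

x = torch.randn(1, 1, 200)  # one hypothetical input window

with torch.no_grad():
    for _ in range(10):       # warm-up iterations
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    latency_ms = (time.perf_counter() - start) / runs * 1e3

print(f"mean CPU inference time: {latency_ms:.3f} ms")
```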
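Finally, a minimal sketch of the pre-training configuration quoted in the Experiment Setup row (Adam with a learning rate of 0.003, batch size 256, 120 epochs, cosine decay, InfoNCE loss). The encoder here is a simplified convolutional + LSTM stand-in with a two-layer projector, and the InfoNCE variant is the standard two-view formulation; neither is the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Simplified convolutional + LSTM backbone with a two-layer FC projector (stand-in)."""
    def __init__(self, in_ch=1, hidden=64, proj_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.projector = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, proj_dim))

    def forward(self, x):                      # x: (batch, channels, samples)
        h = self.conv(x).transpose(1, 2)       # -> (batch, samples, 64)
        _, (h_n, _) = self.lstm(h)             # last hidden state: (1, batch, hidden)
        return self.projector(h_n.squeeze(0))  # -> (batch, proj_dim)

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE between two augmented views (one positive per anchor)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature         # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))         # the matching index is the positive
    return F.cross_entropy(logits, targets)

model = Encoder()
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)

epochs, batch_size = 120, 256
for epoch in range(epochs):
    # A data loader (not shown) would yield two augmented views of each window:
    # for view1, view2 in loader:
    #     loss = info_nce(model(view1), model(view2))
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()
```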