Contrastive Learning Is Not Optimal for Quasiperiodic Time Series
Authors: Adrian Atienza, Jakob Bardram, Sadasivan Puthusserypady
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed model has undergone extensive simulation studies to evaluate its performance. |
| Researcher Affiliation | Academia | Adrian Atienza, Jakob Bardram, Sadasivan Puthusserypady, Department of Health Technology, Technical University of Denmark, {adar, jakba, sapu}@dtu.dk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the methodology described. |
| Open Datasets | Yes | The model is trained with 10-second signals belonging to the Sleep Heart Health Study (SHHS) dataset [Zhang et al., 2018], [Quan et al., 1998]. ... All used databases are publicly available on PhysioNet [Goldberger et al., 2000] and the National Sleep Research Resource (NSRR). |
| Dataset Splits | Yes | "We conducted a five-fold cross-validation to evaluate the performance of the downstream tasks." and "We used the dataset's predefined partitioning of train and validation sets for evaluating the SVC model fitted on top of the representations." and "In the second, we have conducted a Leave-One-Out (LOO) validation across the 23 MIT-AFIB subjects." (See the cross-validation sketch after the table.) |
| Hardware Specification | Yes | The training procedure and the evaluations are performed on a local computer with a Nvidia GeForce RTX 3070 GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify versions for its software dependencies, such as programming languages or libraries. |
| Experiment Setup | Yes | The input data is a time series of 1000 samples, which corresponds to a 10-second signal sampled at 100 Hz. This input is split into segments of 20 samples each. The model comprises 6 regular transformer blocks with 4 heads each. The model dimension is set to 128... The projectors and predictors in our approach are implemented as two-layer Multilayer Perceptrons (MLPs) with dimensionalities of 512 and 256... The EMA updating factor (τ) is set to 0.995. The window size is set to 2 minutes. We weigh the covariance loss with a factor of 0.1. We optimize the most important 32 features during the selective optimization. The training procedure consists of 30,000 iterations. We use a batch size of 256, and Adam... with a learning rate of 3e-4 and a weight decay of 1.5e-6 as the optimizer. (A hedged configuration sketch follows the table.) |
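As referenced in the Dataset Splits row, the sketch below illustrates the two quoted evaluation protocols. It is a minimal illustration assuming scikit-learn; the `records` and `subject_ids` arrays are hypothetical placeholders, since the paper releases no code.

```python
# Minimal sketch of the two evaluation protocols quoted above.
# `records` and `subject_ids` are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut

records = np.arange(1000)      # placeholder record indices (hypothetical)
subject_ids = records % 23     # placeholder grouping: 23 MIT-AFIB subjects

# Five-fold cross-validation for the downstream tasks.
for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(records)):
    # Fit the SVC on representations of records[train_idx],
    # then score it on records[test_idx].
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")

# Leave-One-Out validation across the 23 MIT-AFIB subjects
# (each fold holds out every record of one subject).
for train_idx, test_idx in LeaveOneGroupOut().split(records, groups=subject_ids):
    held_out = np.unique(subject_ids[test_idx])
    print(f"held-out subject: {held_out[0]}")
```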
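The Experiment Setup row can likewise be condensed into a configuration sketch. The following is an assumed PyTorch rendering of the reported numbers only; names such as `patch_embed`, `projector`, `predictor`, and `ema_update` are illustrative, not the authors' implementation, and details the paper describes elsewhere (positional encoding, the covariance loss weighted by 0.1, and the selective optimization of the 32 most important features) are omitted.

```python
# Hedged sketch of the reported setup, assuming PyTorch.
# All module and variable names are illustrative, not the authors' code.
import copy
import torch
import torch.nn as nn

SEQ_LEN, PATCH = 1000, 20              # 10 s at 100 Hz, split into 20-sample segments
N_TOKENS, DIM = SEQ_LEN // PATCH, 128  # 50 tokens, model dimension 128

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=6,                      # 6 regular transformer blocks, 4 heads each
)
patch_embed = nn.Linear(PATCH, DIM)    # segment -> token embedding

# Two-layer MLP projector/predictor with dimensionalities 512 and 256.
def mlp(in_dim=DIM, hidden=512, out=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out))

projector, predictor = mlp(), mlp(in_dim=256)
target_encoder = copy.deepcopy(encoder)  # EMA ("momentum") copy of the online encoder

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()) + list(predictor.parameters()),
    lr=3e-4, weight_decay=1.5e-6,        # reported optimizer settings
)

TAU = 0.995                              # reported EMA updating factor

@torch.no_grad()
def ema_update():
    # target <- TAU * target + (1 - TAU) * online
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.mul_(TAU).add_(p, alpha=1 - TAU)

# One forward pass; training reportedly runs for 30,000 iterations.
x = torch.randn(256, N_TOKENS, PATCH)    # batch size 256
online = predictor(projector(encoder(patch_embed(x))).mean(dim=1))
ema_update()
```

This sketch exists only to make the scattered hyperparameters in the table concrete in one place; it should not be read as a reproduction of the method, which the paper does not release as code.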