Soft Contrastive Learning for Time Series

Authors: Seunghan Lee, Taeyoung Park, Kibok Lee

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments in various tasks, including TS classification, semi-supervised classification, transfer learning, and anomaly detection tasks to prove the effectiveness of the proposed method. Experimental results validate that our method improves the performance of previous CL methods, achieving state-of-the-art (SOTA) performance on a range of downstream tasks.
Researcher Affiliation | Academia | Seunghan Lee, Taeyoung Park, Kibok Lee; Department of Statistics and Data Science, Yonsei University
Pseudocode | No | The paper describes the methodology using text and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks (a minimal reading of the method is sketched in code after this table).
Open Source Code | Yes | Code is available at this repository: https://github.com/seunghan96/softclt.
Open Datasets | Yes | We conduct experiments on TS classification tasks with 125 UCR archive datasets (Dau et al., 2019) for univariate TS and 29 UEA archive datasets (Bagnall et al., 2018) for multivariate TS... We evaluate the compared method on the Yahoo (Laptev et al., 2015) and KPI (Ren et al., 2019) datasets.
Dataset Splits | Yes | Table A.1 describes the summary of the statistical information for eight datasets... including the number of training and testing samples... Table A.2 describes the datasets for in-domain and cross-domain transfer learning... sample size denoted by A/B/C, where each denotes the number of samples used for fine-tuning, validation, and testing, respectively.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper provides hyperparameter settings and training iterations but does not list specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow) required to replicate the experiments.
Experiment Setup | Yes | The table of hyperparameter settings that we utilized can be found in Table C.1. We made use of five hyperparameters: τ_I, τ_T, λ, batch size (bs), and learning rate (lr). For semi-supervised classification and transfer learning, we set the weight decay to 3e-4, β1 = 0.9, and β2 = 0. The number of optimization iterations for classification and anomaly detection tasks is set to 200 for datasets with a size less than 100,000; otherwise, it is set to 600. Additionally, the training epochs for semi-supervised classification are set to 80, while for transfer learning, it is set to 40.
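
Since the paper provides no pseudocode, the snippet below is a minimal PyTorch sketch of how the hyperparameters quoted above could enter a soft contrastive objective: τ_I and τ_T set the sharpness of sigmoid-based soft assignments over instance distances and timestamp gaps, and the resulting weights replace one-hot positives in a contrastive cross-entropy. This is our reading, not the authors' implementation: the helper names are hypothetical, plain Euclidean distance stands in for the distance the paper computes on raw series (DTW), and the row-normalized soft targets and the α cap are assumed formulations.

```python
import torch
import torch.nn.functional as F

def soft_instance_weights(x, tau_i, alpha=0.5):
    # Soft assignment between instances: a sigmoid of the negative,
    # scaled pairwise distance between raw series. Euclidean distance
    # substitutes for the paper's DTW here; alpha caps the weight of
    # non-self pairs (both are assumptions of this sketch).
    # x: (B, T) batch of raw univariate series.
    dist = torch.cdist(x, x)                      # (B, B) pairwise distances
    w = 2 * alpha * torch.sigmoid(-tau_i * dist)  # values in (0, 2 * alpha)
    w.fill_diagonal_(1.0)                         # self-pairs stay hard positives
    return w

def soft_temporal_weights(seq_len, tau_t, device="cpu"):
    # Soft assignment between timestamps: decays with the gap |t - t'|,
    # so nearby timestamps act as "softer" positives than distant ones.
    t = torch.arange(seq_len, device=device, dtype=torch.float32)
    gap = (t[:, None] - t[None, :]).abs()         # (T, T) timestamp gaps
    return 2 * torch.sigmoid(-tau_t * gap)

def soft_contrastive_loss(sim, weights):
    # Cross-entropy against soft targets instead of one-hot positives.
    # sim: (N, N) similarity logits between representations;
    # weights: (N, N) soft assignments, row-normalized into a target
    # distribution (one possible reading of the objective).
    targets = weights / weights.sum(dim=1, keepdim=True)
    return -(targets * F.log_softmax(sim, dim=1)).sum(dim=1).mean()
```

The two losses would then be combined with the λ from Table C.1, e.g. `loss = lam * instance_loss + (1 - lam) * temporal_loss`, which is consistent with λ being listed alongside τ_I and τ_T as a tuned hyperparameter.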
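
The quoted training setup is also concrete enough to restate as code. A sketch under assumptions: `make_optimizer` and `n_iterations` are hypothetical names, a standard PyTorch Adam is assumed rather than confirmed, and because the quoted β2 value appears truncated, PyTorch's default of 0.999 is used purely as a placeholder.

```python
import torch

def make_optimizer(model, lr):
    # Adam with the settings quoted for semi-supervised classification
    # and transfer learning: weight decay 3e-4 and beta1 = 0.9. The
    # quoted beta2 looks cut off, so the PyTorch default (0.999) is
    # kept here as an assumption, not a value from the paper.
    return torch.optim.Adam(model.parameters(), lr=lr,
                            betas=(0.9, 0.999), weight_decay=3e-4)

def n_iterations(train_size):
    # Classification and anomaly detection: 200 optimization iterations
    # for datasets with fewer than 100,000 samples, otherwise 600.
    return 200 if train_size < 100_000 else 600
```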