Towards Enhancing Time Series Contrastive Learning: A Dynamic Bad Pair Mining Approach

Authors: Xiang Lan, Hanshu Yan, Shenda Hong, Mengling Feng

ICLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive experiments conducted on four large-scale, real-world time series datasets, we demonstrate DBPM's efficacy in mitigating the adverse effects of bad positive pairs.
Researcher Affiliation Collaboration 1 Saw Swee Hock School of Public Health & Institute of Data Science, National University of Singapore; 2 ByteDance; 3 National Institute of Health Data Science, Peking University
Pseudocode Yes A.1 DBPM ALGORITHM Algorithm 1: Dynamic Bad Pair Mining
Open Source Code Yes Codes are available at GitHub
Open Datasets Yes Our model is evaluated on four real-world benchmark time series datasets: PTB-XL (Wagner et al., 2020)... HAR (Anguita et al., 2013)... Sleep-EDF (Goldberger et al., 2000)... Epilepsy (Andrzejak et al., 2001)...
Dataset Splits Yes Table 3 (PTB-XL, diagnostic classification): Train 13,715, Val 3,429, Test 4,286; Table 4 (HAR): Train 5,881, Val 1,471, Test 2,947
Hardware Specification Yes Experiments are conducted using PyTorch 1.11.0 (Paszke et al., 2019) on an NVIDIA A100 GPU.
Software Dependencies Yes Experiments are conducted using PyTorch 1.11.0 (Paszke et al., 2019) on an NVIDIA A100 GPU.
Experiment Setup Yes The Adam optimizer (Kingma & Ba, 2015) with a fixed learning rate of 0.001 is used to optimize the linear classifier for all datasets. The temperature τ is set to 0.2 for the contrastive loss defined in Eq. 1.
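The pseudocode row above refers to Algorithm 1 (Dynamic Bad Pair Mining), whose details are in the paper's Appendix A.1. The sketch below is only a schematic illustration of the general idea of mining bad positive pairs from per-pair loss statistics; the threshold rule (`beta` standard deviations) and the hard zero weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dbpm_weights(loss_history, epoch, beta=3.0):
    """Schematic sketch of dynamic bad pair mining (details are assumptions).

    loss_history: (num_pairs, num_epochs) array of per-pair contrastive
    losses recorded so far. Pairs whose historical mean loss is a statistical
    outlier at the current epoch are treated as potential bad positive pairs
    and down-weighted in the training objective.
    """
    # Per-pair statistic: mean loss over the epochs observed so far.
    hist_mean = loss_history[:, :epoch + 1].mean(axis=1)
    mu, sigma = hist_mean.mean(), hist_mean.std()
    weights = np.ones_like(hist_mean)
    # Unusually high loss -> suspected noisy pair;
    # unusually low loss -> suspected faulty pair.
    bad = (hist_mean > mu + beta * sigma) | (hist_mean < mu - beta * sigma)
    weights[bad] = 0.0  # simplest choice; the paper applies a weighting transformation
    return weights
```

In use, these weights would rescale each pair's contribution to the contrastive loss at the current epoch, so flagged pairs stop dominating (or silently corrupting) the representation learning signal.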
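The experiment setup row specifies a temperature of 0.2 for the contrastive loss (Eq. 1). A minimal, framework-agnostic NumPy sketch of a standard NT-Xent-style contrastive loss with that temperature is shown below; the paper itself uses PyTorch 1.11.0, and the exact form of its Eq. 1 may differ, so treat this as an illustration of the temperature's role rather than the authors' loss.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.2):
    """NT-Xent-style contrastive loss over two augmented views (sketch).

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 form a positive
    pair, and all other rows in the batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit-norm rows
    sim = z @ z.T / temperature                          # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    # The positive of sample i is i+N (and vice versa).
    targets = np.concatenate([np.arange(n) + n, np.arange(n)])
    # Numerically stable cross-entropy over similarity rows.
    m = sim.max(axis=1, keepdims=True)
    log_z = m.squeeze(1) + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(log_z - sim[np.arange(2 * n), targets]))
```

A smaller temperature sharpens the softmax over similarities, so hard negatives are penalized more strongly; 0.2 is a common middle-ground choice in time series contrastive learning.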