Cross-Domain Contrastive Learning for Time Series Clustering

Authors: Furong Peng, Jiachen Luo, Xuan Lu, Sheng Wang, Feijiang Li

AAAI 2024

Reproducibility Variable — Result — LLM Response

Research Type — Experimental — Extensive experiments and visualization analysis are conducted on 40 time series datasets from UCR, demonstrating the superior performance of the proposed model.

Researcher Affiliation — Academia — Furong Peng (1, 2), Jiachen Luo (1, 2), Xuan Lu (3)*, Sheng Wang (4), Feijiang Li (1, 2); (1) Institute of Big Data Science and Industry, Shanxi University; (2) School of Computer and Information Technology, Shanxi University; (3) College of Physics and Electronic Engineering, Shanxi University; (4) School of Automation, Zhengzhou University of Aeronautics.

Pseudocode — No — The paper describes the model architecture and mathematical formulations, but it does not include structured pseudocode or algorithm blocks.

Open Source Code — Yes — https://github.com/JiacLuo/CDCC

Open Datasets — Yes — Experiments were conducted on 40 time series datasets from the UCR archive (Dau et al. 2019).

Dataset Splits — No — The training and testing sets from the UCR archive were merged for evaluation. The paper mentions training and testing sets, but does not specify a separate validation split or an explicit cross-validation methodology.

Hardware Specification — Yes — The experiments were conducted on a DCU Z100SM (16 GB) computing card in a PyTorch environment.

Software Dependencies — No — The experiments were conducted on a DCU Z100SM (16 GB) computing card in a PyTorch environment. While PyTorch is mentioned, no specific version number is provided, nor are other software dependencies listed with version numbers.

Experiment Setup — Yes — In CDCC, τ_I = 0.5 and τ_C = 1. The learning rate, the number of BiLSTM layers, the batch size, and the dropout rate were searched. [...] We set α = 0.8, β = 1.1, and γ = 0.8. [...] In our experiments, we set ω = 0.1, θ = 0.1, and ε = 0.1.
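The reported hyperparameters can be collected into a single configuration sketch. Only the numeric values come from the paper's reported settings; the key names below are illustrative, not the authors' actual variable names:

```python
# Hypothetical CDCC configuration sketch: values are the ones quoted
# in the paper's experiment setup; key names are illustrative only.
cdcc_config = {
    "tau_I": 0.5,    # instance-level contrastive temperature (reported)
    "tau_C": 1.0,    # cluster-level contrastive temperature (reported)
    "alpha": 0.8,    # loss weight (reported)
    "beta": 1.1,     # loss weight (reported)
    "gamma": 0.8,    # loss weight (reported)
    "omega": 0.1,    # reported
    "theta": 0.1,    # reported
    "epsilon": 0.1,  # reported
}

# These were tuned by search; no fixed values are reported in the paper.
searched_hyperparameters = [
    "learning_rate",
    "num_bilstm_layers",
    "batch_size",
    "dropout_rate",
]
```

A sketch like this makes the reproducibility gap concrete: the fixed values are recoverable from the text, but the four searched hyperparameters would have to be re-tuned per dataset.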