Similarity Preserving Representation Learning for Time Series Clustering

Authors: Qi Lei, Jinfeng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "By conducting extensive empirical studies, we show that the proposed framework is more effective, efficient, and flexible, compared to other state-of-the-art time series clustering methods." "We conduct extensive experiments on all the 85 datasets in the UCR time series classification and clustering repository [Chen et al., 2015]."
Researcher Affiliation | Collaboration | Qi Lei (1), Jinfeng Yi (2), Roman Vaculin (3), Lingfei Wu (3), and Inderjit S. Dhillon (4,1); affiliations: 1 University of Texas at Austin, 2 JD AI Research, 3 IBM Research, 4 Amazon
Pseudocode | Yes | "Algorithm 1: Efficient Exact Cyclic Coordinate Descent Algorithm for Solving the Optimization Problem (4)"
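The cited Algorithm 1 updates one coordinate of the learned representation at a time. Below is a minimal sketch of plain cyclic coordinate descent on a sampled-pair factorization objective of the form min_X Σ_{(i,j)∈Ω} (S_ij − x_i·x_j)², which stands in for the paper's problem (4). It assumes Ω is symmetric and excludes diagonal pairs, so each univariate subproblem is quadratic with a closed-form minimizer; the function name and defaults are illustrative, and the paper's efficient exact updates are not reproduced here.

```python
import numpy as np

def cyclic_cd(S_obs, n, d=15, n_iters=100, tol=1e-5, seed=0):
    """Plain cyclic coordinate descent for
        min_X  sum_{(i,j) in Omega} (S_ij - x_i . x_j)^2,
    a simplified stand-in for the paper's problem (4).
    S_obs: dict {(i, j): similarity} over observed pairs, i != j,
    assumed symmetric ((i, j) observed iff (j, i) is)."""
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((n, d))
    nbrs = [[] for _ in range(n)]          # nbrs[i] = [(j, S_ij), ...]
    for (i, j), s in S_obs.items():
        nbrs[i].append((j, s))

    def objective():
        return sum((s - X[i] @ X[j]) ** 2 for (i, j), s in S_obs.items())

    prev = objective()
    for _ in range(n_iters):
        for i in range(n):
            for k in range(d):
                num = den = 0.0
                for j, s in nbrs[i]:
                    # dot product of x_i and x_j with coordinate k removed
                    c = X[i] @ X[j] - X[i, k] * X[j, k]
                    num += X[j, k] * (s - c)
                    den += X[j, k] ** 2
                if den > 0.0:
                    X[i, k] = num / den    # exact minimizer of the quadratic
        cur = objective()
        if prev - cur < tol:               # stopping rule quoted below
            return X
        prev = cur
    return X
```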
Open Source Code | Yes | "Our source code and the detailed experimental results are publicly available." (footnote 2: https://github.com/cecilialeiqi/SPIRAL)
Open Datasets | Yes | "We conduct extensive experiments on all the 85 datasets in the UCR time series classification and clustering repository [Chen et al., 2015]." Cited archive: Chen et al., The UCR Time Series Classification Archive, July 2015, www.cs.ucr.edu/~eamonn/time_series_data/
Dataset Splits | No | "Since data clustering is an unsupervised learning problem, we merge the training and testing sets of all the datasets."
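Since the evaluation is unsupervised, the quoted setup merges each dataset's train and test splits rather than keeping a held-out set. A minimal sketch of that step, assuming the classic comma-separated UCR 2015 file layout with the class label in the first column; the paths and helper name are illustrative, not from the paper's code:

```python
import numpy as np

def load_ucr_merged(train_path, test_path):
    """Merge a UCR dataset's train/test splits for clustering.
    Assumes the 2015 archive layout: one series per row,
    comma-separated, class label in the first column."""
    data = np.vstack([np.loadtxt(train_path, delimiter=","),
                      np.loadtxt(test_path, delimiter=",")])
    labels = data[:, 0].astype(int)   # kept only to score clusterings
    series = data[:, 1:]              # raw time series values
    return series, labels

# Illustrative usage with placeholder file names:
# series, labels = load_ucr_merged("Coffee/Coffee_TRAIN", "Coffee/Coffee_TEST")
```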
Hardware Specification | Yes | "All the results were averaged from 5 trials and obtained on a Linux server with an Intel Xeon 2.40 GHz CPU and 256 GB of main memory."
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python version, library versions) used for the implementation or experiments.
Experiment Setup | Yes | "In our experiments, we set |Ω| = ⌈20n log n⌉ and the number of features d = 15. The convergence criterion is that the objective decreases by less than 1e-5 in one iteration. To conduct fair comparisons, in all DTW-related algorithms and all datasets, we set the DTW window size to be the best warping size reported in [Chen et al., 2015]."
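For concreteness, a hedged sketch of the quoted sampling budget: drawing |Ω| = ⌈20n log n⌉ off-diagonal similarity pairs. The quote does not spell out the sampling distribution or the log base, so uniform sampling with the natural log is an assumption here:

```python
import math
import numpy as np

def sample_pair_budget(n):
    """Number of observed similarity pairs per the quoted setup,
    |Omega| = ceil(20 n log n); natural log assumed."""
    return math.ceil(20 * n * math.log(n))

def sample_pairs(n, seed=0):
    """Draw |Omega| off-diagonal index pairs uniformly at random.
    Uniform sampling is an assumption; the paper may sample differently."""
    rng = np.random.default_rng(seed)
    m = sample_pair_budget(n)
    i = rng.integers(0, n, size=m)
    j = rng.integers(0, n, size=m)
    keep = i != j                      # drop diagonal pairs
    return list(zip(i[keep].tolist(), j[keep].tolist()))
```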