CNN Kernels Can Be the Best Shapelets

Authors: Eric Qu, Yansen Wang, Xufang Luo, Wenqiang He, Kan Ren, Dongsheng Li

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that ShapeConv can achieve state-of-the-art performance on time-series benchmarks without sacrificing interpretability and controllability. We evaluate our ShapeConv model on time-series classification tasks using the UCR univariate time-series datasets (Dau et al., 2019) and the UEA multivariate time-series datasets (Bagnall et al., 2018). Our experiments on various benchmark datasets showed that ShapeConv outperforms other shapelet-based methods and state-of-the-art time-series classification and clustering models.
Researcher Affiliation | Collaboration | Eric Qu1, Yansen Wang2, Xufang Luo2, Wenqiang He3, Kan Ren4, Dongsheng Li2; 1University of California, Berkeley; 2Microsoft Research Asia; 3University of Science and Technology of China; 4ShanghaiTech University; ericqu@berkeley.edu, {yansenwang,xufluo,dongsheng.li}@microsoft.com, wenqianghe@mail.ustc.edu.cn, renkan@shanghaitech.edu.cn
Pseudocode | No | The paper contains no structured pseudocode or algorithm blocks. The methods are described in narrative text and mathematical formulations, not in a step-by-step, code-like format.
Open Source Code | No | The paper provides no link to a code repository and makes no statement about the release or availability of source code for the described methodology.
Open Datasets | Yes | We evaluate our ShapeConv model on time-series classification tasks using the UCR univariate time-series datasets (Dau et al., 2019) and the UEA multivariate time-series datasets (Bagnall et al., 2018). The training set is divided into training and validation sets at an 8:2 ratio. We evaluate our ShapeConv model on the time-series clustering task using 36 UCR univariate time-series datasets (Dau et al., 2019).
Dataset Splits | Yes | The training set is divided into training and validation sets at an 8:2 ratio. Hyperparameters are tuned via grid search based on validation-set performance and are reported in Appendix G.2.
Hardware Specification | Yes | All experiments are performed on the PyTorch framework using a 24-core AMD Epyc 7V13 2.5GHz CPU, 220GB RAM, and an NVIDIA A100 80GB PCIe GPU.
Software Dependencies | No | The paper states that "All experiments are performed on the PyTorch framework" (Appendix G.1), but it does not specify a PyTorch version or list any other software dependencies with version numbers required for reproducibility.
Experiment Setup | Yes | Hyperparameters are tuned via grid search based on validation-set performance and are reported in Appendix G.2. The number of shapelets is chosen from {1,2,3,4,5} times the number of classes, and the shapelet length is evaluated over {0.1,0.2,...,0.8} times the time-series length. The parameter λshape is chosen from {0.01,0.1,1,10}, the parameter λdiv is evaluated over {0.01,0.1,1,10}, and the learning rate is chosen from {0.001,0.005,0.01,0.05,0.1}.
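The grid-search space reported under "Experiment Setup" can be enumerated with a short sketch. This is an illustrative reconstruction, not the authors' code; `num_classes` and `series_length` are placeholder values that would vary per dataset:

```python
from itertools import product

# Placeholder dataset properties (assumed for illustration only).
num_classes, series_length = 3, 100

# Hyperparameter grids as reported in the paper (Appendix G.2).
grid = {
    "num_shapelets": [k * num_classes for k in (1, 2, 3, 4, 5)],
    "shapelet_len": [round(r * series_length) for r in
                     (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8)],
    "lambda_shape": [0.01, 0.1, 1, 10],
    "lambda_div":   [0.01, 0.1, 1, 10],
    "lr":           [0.001, 0.005, 0.01, 0.05, 0.1],
}

# Cartesian product of all grids: 5 * 8 * 4 * 4 * 5 = 3200 configurations,
# each to be scored on the validation split (8:2 train/val ratio).
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```

Each configuration in `configs` would be trained once and ranked by validation performance, which makes the scale of the reported search (3,200 candidate settings per dataset under these placeholders) explicit.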