Learning Representations for Time Series Clustering

Authors: Qianli Ma, Jiawei Zheng, Sen Li, Garrison W. Cottrell

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on extensive time series datasets show that DTCR is state-of-the-art compared to existing methods.
Researcher Affiliation | Academia | Qianli Ma (South China University of Technology, Guangzhou, China; qianlima@scut.edu.cn); Jiawei Zheng (South China University of Technology, Guangzhou, China; csjwzheng@foxmail.com); Sen Li (South China University of Technology, Guangzhou, China; awslee@foxmail.com); Garrison W. Cottrell (University of California, San Diego, CA, USA; gary@ucsd.edu)
Pseudocode | Yes | Algorithm 1: DTCR Training Method
Open Source Code | No | The paper provides links to code for other methods (DTC, DEC, IDEC) in footnotes, but does not state that its own code for DTCR is open-source or provide a link for it.
Open Datasets | Yes | Following the protocol used in [20, 24, 5, 25, 29], we conduct experiments on the 36 UCR [30] time series datasets to evaluate performance. The statistics of these 36 datasets are shown in Table 1 of the Supplementary Material. Each data set has a default train/test split. We adopted the protocol used in USSL [29], training on the training set and evaluating on the test set for comparison. (A data-loading sketch of this protocol follows the table.)
Dataset Splits | No | Each data set has a default train/test split. We adopted the protocol used in USSL [29], training on the training set and evaluating on the test set for comparison.
Hardware Specification | Yes | The experiments are run on the TensorFlow [32] platform using an Intel Core i7-6850K 3.60-GHz CPU, 64-GB RAM, and a GeForce GTX 1080 Ti 11G GPU.
Software Dependencies | No | The paper mentions the 'TensorFlow [32] platform' and the 'Adam [33] optimizer' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | In our experiments, we fixed the number of layers and the dilations per layer to 3 and (1, 4, 16), respectively. The decoder is a single-layer RNN. Gated Recurrent Units (GRU) are used in the RNNs [31]. The number of units per layer of the encoder is [m1, m2, m3] ∈ {[100, 50, 50], [50, 30, 30]}. The number of hidden units in the decoder is (m1 + m2 + m3) 2. The λ of Eq. (9) ∈ {1, 1e-1, 1e-2, 1e-3}. The batch size is 2N. The Adam [33] optimizer is employed with an initial learning rate of 5e-3.
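
The UCR protocol quoted in the Open Datasets and Dataset Splits rows (use each dataset's default train/test split, train on the training set, evaluate on the test set) amounts to a few lines of code. Below is a minimal sketch, assuming the tab-separated layout of the 2018 UCR archive (one file per split, class label in the first column); the root path and dataset name are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the UCR evaluation protocol quoted above: load one dataset's
# default train/test split. Assumes the UCRArchive_2018 layout
# (<name>/<name>_TRAIN.tsv and <name>/<name>_TEST.tsv, tab-separated, label in
# column 0). The root path and dataset name are placeholders.
import numpy as np

def load_ucr_split(root, name):
    def read(split):
        data = np.loadtxt(f"{root}/{name}/{name}_{split}.tsv", delimiter="\t")
        return data[:, 1:], data[:, 0].astype(int)   # (series, labels)
    x_train, y_train = read("TRAIN")
    x_test, y_test = read("TEST")
    return x_train, y_train, x_test, y_test

x_train, y_train, x_test, y_test = load_ucr_split("UCRArchive_2018", "Coffee")
print(x_train.shape, x_test.shape)   # sizes of the default train/test split
```

Under this protocol the labels are not used to train the clustering model; they serve only to compute external evaluation metrics (e.g., Rand Index or NMI) on the chosen split.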
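
The hyperparameters quoted in the Experiment Setup row (a 3-layer dilated-GRU encoder with dilations 1, 4, 16 and units [100, 50, 50] or [50, 30, 30], a single-layer GRU decoder, Adam with learning rate 5e-3) can be summarized in a short configuration sketch. The code below is not the authors' implementation: it approximates the dilated recurrent skip connections by sub-sampling the time axis between layers, uses an assumed decoder width, and keeps only the reconstruction loss, omitting DTCR's K-means and fake-sample classification terms.

```python
# Hedged TensorFlow 2.x / Keras sketch of a DTCR-like encoder-decoder with the
# hyperparameters quoted above. Dilation is approximated by temporal
# sub-sampling; only the reconstruction loss is kept.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_dtcr_like_autoencoder(seq_len, n_features=1,
                                units=(100, 50, 50),    # encoder units per layer (paper: [100,50,50] or [50,30,30])
                                dilations=(1, 4, 16)):  # dilation per layer (paper: 1, 4, 16)
    inp = layers.Input(shape=(seq_len, n_features))

    x, last_states, prev_d = inp, [], 1
    for u, d in zip(units, dilations):
        stride = d // prev_d                             # sub-sample relative to the previous layer
        if stride > 1:
            x = layers.Lambda(lambda t, s=stride: t[:, ::s, :])(x)
        x = layers.GRU(u, return_sequences=True)(x)
        last_states.append(x[:, -1, :])                  # last hidden state of this layer
        prev_d = d

    # Representation: concatenation of the last states of all layers (dim m1 + m2 + m3).
    z = layers.Concatenate()(last_states)

    # Decoder: a single-layer GRU reconstructing the series from the repeated representation.
    dec_units = sum(units) // 2                          # assumption; the exact decoder size is unclear in the quote
    h = layers.RepeatVector(seq_len)(z)
    h = layers.GRU(dec_units, return_sequences=True)(h)
    recon = layers.TimeDistributed(layers.Dense(n_features))(h)

    encoder = Model(inp, z, name="encoder")
    autoencoder = Model(inp, recon, name="autoencoder")
    autoencoder.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-3),  # learning rate from the paper
                        loss="mse")
    return encoder, autoencoder
```

The batch size of 2N quoted above is consistent with DTCR pairing each real series with a generated "fake" series for its auxiliary real/fake classification task; that task and the λ-weighted K-means regularizer over the encoder representations would be added on top of this reconstruction-only sketch.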