Learning Representations for Incomplete Time Series Clustering

Authors: Qianli Ma, Chuxin Chen, Sen Li, Garrison W. Cottrell

AAAI 2021, pp. 8837-8846

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "An experiment conducted on eight real-world incomplete time-series datasets shows that CRLI outperforms existing methods." and "We collected eight real-world incomplete time-series datasets from various existing works in several domains...and conducted experiments on these datasets to evaluate performance."
Researcher Affiliation | Academia | Qianli Ma (1,2)*, Chuxin Chen (1)*, Sen Li (1)*, Garrison W. Cottrell (3); (1) South China University of Technology, Guangzhou, China; (2) Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education; (3) University of California, San Diego, CA, USA
Pseudocode | Yes | "Algorithm 1 CRLI Training Method" (a hedged training-loop sketch follows the table)
Open Source Code | No | The paper does not explicitly state that the code for CRLI is open source or provide a link to a repository for its implementation.
Open Datasets | Yes | "We collected eight real-world incomplete time-series datasets from various existing works in several domains (Alizadeh et al. 2000; Bianchi, Mikalsen, and Jenssen 2017; Chen et al. 2002; Liang et al. 2005; Silva et al. 2012) (Dua and Graff 2017)" and "We firstly randomly drop the values of the first 20 data sets in the UCR time series data set archive (Chen et al. 2015)". Footnotes 1 and 2 provide URLs to these archives. (A value-dropping sketch follows the table.)
Dataset Splits | No | "Following (Xie, Girshick, and Farhadi 2016; Guo et al. 2017; Madiraju et al. 2018; Ma et al. 2019), we train the model on the training set and evaluate it on the test set." There is no explicit mention of a validation set split.
Hardware Specification | Yes | "The experiments are run on the TensorFlow (Abadi et al. 2016) platform using an Intel Core i7-6850K 3.60-GHz CPU, 64-GB RAM, and a GeForce GTX 1080 Ti 11-GB GPU."
Software Dependencies | No | The paper mentions the "TensorFlow (Abadi et al. 2016) platform" but does not specify a version number for TensorFlow or any other software dependency.
Experiment Setup | Yes | "The number of layers of the encoder is l ∈ {1, 2}. The number of units of each layer in the encoder is h ∈ {50, 100}. The λ in Eq. (13) is chosen from {1e-3, 1e-6, 1e-9}. The batch size is 32. The Adam (Kingma and Ba 2015) optimizer is employed with an initial learning rate of 5e-3." (The grid is enumerated in a snippet after the table.)
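
The Pseudocode row only names "Algorithm 1 CRLI Training Method"; the paper's actual algorithm is not reproduced here. As a reading aid, below is a minimal PyTorch-style sketch of the kind of alternating generator/discriminator update such a training method implies, with a simplified stand-in for the k-means regularizer weighted by λ from Eq. (13). All module names, shapes, and the `kmeans_loss` stand-in are our assumptions, not the authors' code.

```python
# Hedged sketch of an alternating training step in the spirit of Algorithm 1.
# Module names, shapes, and the k-means stand-in are assumptions, not CRLI's code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """GRU encoder-decoder: imputes missing steps and yields a representation h."""
    def __init__(self, n_feat, n_hidden=50):
        super().__init__()
        self.enc = nn.GRU(n_feat, n_hidden, batch_first=True)
        self.dec = nn.GRU(n_hidden, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_feat)

    def forward(self, x):
        h_seq, h_last = self.enc(x)            # h_last: (1, B, H)
        dec_seq, _ = self.dec(h_seq)
        return self.out(dec_seq), h_last.squeeze(0)

class Discriminator(nn.Module):
    """Per-step logits predicting which entries are observed (1) vs. imputed (0)."""
    def __init__(self, n_feat, n_hidden=50):
        super().__init__()
        self.rnn = nn.GRU(n_feat, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_feat)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

def kmeans_loss(h):
    # Simplified stand-in: within-batch scatter of the representations.
    # The paper instead uses a spectral relaxation of k-means.
    return ((h - h.mean(dim=0)) ** 2).mean()

def train_step(G, D, opt_g, opt_d, x, mask, lam=1e-3):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator step: tell observed entries apart from imputed ones.
    with torch.no_grad():
        imputed, _ = G(x)
    filled = mask * x + (1 - mask) * imputed
    d_loss = bce(D(filled), mask)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: reconstruct observed values, fool D, regularize with λ.
    imputed, h = G(x)
    filled = mask * x + (1 - mask) * imputed
    rec = (((imputed - x) ** 2) * mask).sum() / mask.sum()
    adv = bce(D(filled), torch.ones_like(mask))
    g_loss = rec + adv + lam * kmeans_loss(h)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage on random data with a 20% missing mask.
B, T, F = 32, 24, 1
x = torch.randn(B, T, F)
mask = (torch.rand(B, T, F) > 0.2).float()
G, D = Generator(F), Discriminator(F)
opt_g = torch.optim.Adam(G.parameters(), lr=5e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=5e-3)
print(train_step(G, D, opt_g, opt_d, x, mask))
```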
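
For the synthetic-missingness setup quoted under Open Datasets ("randomly drop the values" of complete UCR series), a minimal illustration of the dropping step is shown below. The 20% rate, the RNG seed, and the NaN encoding of missing entries are our choices for the example, not details taken from the paper.

```python
# Illustrative value-dropping for complete UCR series; rate, seed, and the
# NaN encoding of missing entries are assumptions made for this example.
import numpy as np

def drop_values(series, missing_rate=0.2, seed=0):
    """Mask out `missing_rate` of entries; return the incomplete series
    (missing = NaN) and the observation mask (1 = observed, 0 = missing)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(series.shape) >= missing_rate
    return np.where(mask, series, np.nan), mask.astype(np.float32)

x = np.sin(np.linspace(0.0, 6.28, 128))        # toy complete series
x_miss, m = drop_values(x, missing_rate=0.2)
print(f"dropped {100 * (1 - m.mean()):.0f}% of values")
```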
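
The search space quoted under Experiment Setup can be enumerated directly. The dictionary layout below is ours; only the values (l, h, λ, batch size, learning rate) come from the quote.

```python
# Enumerating the quoted hyperparameter grid; the structure is ours,
# the values come from the Experiment Setup row above.
from itertools import product

grid = {
    "encoder_layers": [1, 2],            # l
    "units_per_layer": [50, 100],        # h
    "lambda": [1e-3, 1e-6, 1e-9],        # λ in Eq. (13)
}
fixed = {"batch_size": 32, "optimizer": "Adam", "learning_rate": 5e-3}

configs = [dict(zip(grid, vals), **fixed) for vals in product(*grid.values())]
print(len(configs), "configurations; first:", configs[0])
```

Iterating over `grid` yields its keys, so `zip(grid, vals)` pairs each key with one value from the Cartesian product, giving 2 x 2 x 3 = 12 candidate configurations.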