Imaging Time-Series to Improve Classification and Imputation

Authors: Zhiguang Wang, Tim Oates

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate our approaches achieve the best performance on 9 of 20 standard datasets compared with 9 previous and current best classification methods. The imputation MSE on test data is reduced by 12.18%-48.02% compared to using the raw data.
Researcher Affiliation | Academia | Zhiguang Wang and Tim Oates, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County. {stephen.wang, oates}@umbc.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | We apply Tiled CNNs to classify time series using GAF and MTF representations on 20 datasets from [Keogh et al., 2011]. (The GAF and MTF encodings are sketched after the table.)
Dataset Splits | Yes | The datasets are pre-split into training and testing sets to facilitate experimental comparisons. We use a linear soft margin SVM [Fan et al., 2008] and select C by 5-fold cross validation over {10^-4, 10^-3, ..., 10^4} on the training set. (See the cross-validation sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models) used for running its experiments.
Software Dependencies | No | The paper mentions "Theano [Bastien et al., 2012]" but does not specify its version number, nor does it list other software dependencies with version numbers.
Experiment Setup | Yes | In our experiments, the size of the GAF image is regulated by the number of PAA bins S_GAF. The Tiled CNN is trained with image size {S_GAF, S_MTF} in {16, 24, 32, 40, 48} and quantile size Q in {8, 16, 32, 64}. At the last layer of the Tiled CNN, we use a linear soft margin SVM [Fan et al., 2008] and select C by 5-fold cross validation over {10^-4, 10^-3, ..., 10^4} on the training set. For the DA models we use batch gradient descent with a batch size of 20. Optimization iterations run until the MSE changes by less than a threshold of 10^-3 for GASF and 10^-5 for raw time series. A single hidden layer has 500 hidden neurons with sigmoid functions. We randomly set 20% of the raw data among a specific time series to be zero (salt-and-pepper noise). (See the denoising-autoencoder sketch after the table.)
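
For reference, the GAF and MTF encodings quoted in the "Open Datasets" row are easy to reproduce from the paper's definitions. Below is a minimal NumPy sketch, not the authors' (unreleased) code: the helper names paa, gasf, and mtf are ours, and the Gramian variant shown is the summation field (GASF), cos(phi_i + phi_j) after rescaling the series to [-1, 1].

```python
import numpy as np

def paa(x, n_bins):
    """Piecewise Aggregate Approximation: mean of each of n_bins segments."""
    return np.array([s.mean() for s in np.array_split(x, n_bins)])

def gasf(x, n_bins=32):
    """Gramian Angular Summation Field of a 1-D time series."""
    x = paa(np.asarray(x, dtype=float), n_bins)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))  # polar-coordinate angles
    # GASF(i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

def mtf(x, Q=8, n_bins=32):
    """Markov Transition Field with Q quantile bins."""
    x = paa(np.asarray(x, dtype=float), n_bins)
    # Assign each point to one of Q quantile bins.
    edges = np.quantile(x, np.linspace(0, 1, Q + 1)[1:-1])
    bins = np.digitize(x, edges)
    # First-order Markov transition matrix over the bins.
    W = np.zeros((Q, Q))
    for i, j in zip(bins[:-1], bins[1:]):
        W[i, j] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)
    # MTF(i, j) = transition probability from bin(x_i) to bin(x_j).
    return W[bins[:, None], bins[None, :]]
```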
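The C-selection step quoted in the "Dataset Splits" and "Experiment Setup" rows maps directly onto scikit-learn. A hedged sketch follows: the paper used LIBLINEAR [Fan et al., 2008] rather than scikit-learn, and the random features here stand in for the last-layer Tiled CNN features, which we cannot reproduce without the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Placeholder features; in the paper these come from the last Tiled CNN layer.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)

# C grid from the paper, {10^-4, 10^-3, ..., 10^4}, searched with
# 5-fold cross validation on the training set only.
search = GridSearchCV(LinearSVC(), {"C": 10.0 ** np.arange(-4, 5)}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
```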
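Likewise, the DA training recipe in the "Experiment Setup" row can be sketched in plain NumPy. This is not the authors' Theano implementation: the 500 sigmoid hidden units, batch size 20, salt-and-pepper corruption (20% of points zeroed), and MSE stopping threshold follow the row above, while the tied weights, learning rate, and [0, 1] input scaling are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(X, frac=0.2):
    """Salt-and-pepper corruption: zero out a random fraction of each series."""
    return X * (rng.random(X.shape) >= frac)

def train_da(X, n_hidden=500, batch_size=20, lr=0.1, tol=1e-3, max_epochs=500):
    """Single-hidden-layer denoising autoencoder (tied weights, assumed),
    trained with batch gradient descent until the MSE change falls below tol.
    Assumes X is rescaled to [0, 1] to match the sigmoid decoder."""
    n, d = X.shape
    W = rng.normal(0, 0.01, (d, n_hidden))
    b_h, b_o = np.zeros(n_hidden), np.zeros(d)
    prev_mse = np.inf
    for _ in range(max_epochs):
        Xc = corrupt(X)
        for start in range(0, n, batch_size):
            xb, xc = X[start:start + batch_size], Xc[start:start + batch_size]
            h = sigmoid(xc @ W + b_h)        # encode corrupted input
            r = sigmoid(h @ W.T + b_o)       # decode with tied weights
            g_o = (r - xb) * r * (1 - r)     # output-layer delta
            g_h = (g_o @ W) * h * (1 - h)    # hidden-layer delta
            W -= lr * (xc.T @ g_h + g_o.T @ h) / len(xb)
            b_o -= lr * g_o.mean(axis=0)
            b_h -= lr * g_h.mean(axis=0)
        # Reconstruction MSE on freshly corrupted data; stop when it plateaus.
        r_all = sigmoid(sigmoid(corrupt(X) @ W + b_h) @ W.T + b_o)
        mse = np.mean((r_all - X) ** 2)
        if abs(prev_mse - mse) < tol:
            break
        prev_mse = mse
    return W, b_h, b_o
```

Per the row above, tol would be 10^-3 when training on GASF images and 10^-5 on raw time series.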