Learnable Dynamic Temporal Pooling for Time Series Classification

Authors: Dongha Lee, Seonghyeon Lee, Hwanjo Yu

AAAI 2021, pp. 8288-8296 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on both univariate and multivariate time series datasets show that our proposed pooling significantly improves the classification performance."
Researcher Affiliation | Academia | "Dongha Lee (1), Seonghyeon Lee (2), Hwanjo Yu (2); (1) Institute of Artificial Intelligence, POSTECH, Republic of Korea; (2) Dept. of Computer Science and Engineering, POSTECH, Republic of Korea"
Pseudocode | Yes | "Algorithm 1: Forward and backward recursions to compute DTWγ(P, H) and ∇P DTWγ(P, H)" (a sketch of these recursions follows the table)
Open Source Code | No | The paper provides neither a link to its source code nor an explicit statement about releasing it.
Open Datasets | Yes | "we use 85 univariate time series datasets and 30 multivariate time series datasets from the UCR/UEA repository (Bagnall et al. 2018; Dau et al. 2018)" (a loading sketch follows the table)
Dataset Splits | Yes | "For all the datasets, we repeatedly train each classifier three times with different random seeds, and report the median accuracy. ... we use 85 univariate time series datasets and 30 multivariate time series datasets from the UCR/UEA repository (Bagnall et al. 2018; Dau et al. 2018)."
Hardware Specification | No | No hardware details, such as GPU/CPU models or memory sizes, used for running the experiments are provided.
Software Dependencies | No | The paper mentions PyTorch and the Numba compiler but does not specify version numbers for these dependencies, which reproducibility requires.
Experiment Setup | Yes | "Table 1: Hyperparameters for CNN architectures and their optimization. We follow the setting provided by the previous work (Wang, Yan, and Oates 2017; Fawaz et al. 2019)." FCN: Adam, 500 epochs, batch size 16, learning rate 0.0001. ResNet: Adam, 500 epochs, batch size 64, learning rate 0.0001. (A training-setup sketch follows the table.)
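
The paper's Algorithm 1 computes the soft-DTW value DTWγ and its gradient by dynamic programming. Below is a minimal NumPy sketch of the standard soft-DTW forward and backward recursions (Cuturi and Blondel 2017), which the paper's algorithm instantiates; it operates on a precomputed cost matrix `delta` between the prototype P and the hidden representation H, so the gradient with respect to P follows by the chain rule. All variable names and the NumPy implementation are my own rendering, not the paper's code.

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Smoothed minimum: -gamma * log(exp(-a/g) + exp(-b/g) + exp(-c/g)),
    # computed stably with the log-sum-exp trick.
    vals = -np.array([a, b, c]) / gamma
    vmax = vals.max()
    return -gamma * (vmax + np.log(np.exp(vals - vmax).sum()))

def soft_dtw(delta, gamma=1.0):
    # Forward recursion; R[n, m] is the soft-DTW value DTW_gamma.
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = delta[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma)
    return R

def soft_dtw_grad(delta, R, gamma=1.0):
    # Backward recursion; E[i, j] = d DTW_gamma / d delta[i, j].
    n, m = delta.shape
    D = np.zeros((n + 2, m + 2))
    D[1:n + 1, 1:m + 1] = delta
    E = np.zeros((n + 2, m + 2))
    E[n + 1, m + 1] = 1.0
    Rb = np.full((n + 2, m + 2), -np.inf)   # -inf boundary blocks invalid paths
    Rb[1:n + 1, 1:m + 1] = R[1:, 1:]
    Rb[n + 1, m + 1] = R[n, m]
    for j in range(m, 0, -1):
        for i in range(n, 0, -1):
            a = np.exp((Rb[i + 1, j] - Rb[i, j] - D[i + 1, j]) / gamma)
            b = np.exp((Rb[i, j + 1] - Rb[i, j] - D[i, j + 1]) / gamma)
            c = np.exp((Rb[i + 1, j + 1] - Rb[i, j] - D[i + 1, j + 1]) / gamma)
            E[i, j] = a * E[i + 1, j] + b * E[i, j + 1] + c * E[i + 1, j + 1]
    return E[1:n + 1, 1:m + 1]

# Example: cost matrix between a prototype P (n x d) and hidden features H (m x d).
P, H = np.random.randn(4, 8), np.random.randn(10, 8)
delta = ((P[:, None, :] - H[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
R = soft_dtw(delta, gamma=1.0)
E = soft_dtw_grad(delta, R, gamma=1.0)                   # gradient w.r.t. delta
```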
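
On the dataset rows: the UCR/UEA repository ships a fixed, predefined train/test split for every dataset, which is what the quoted evidence relies on. As an illustration only, the splits can be fetched with the tslearn loader; the paper does not state which loader it used, and "GunPoint" is just an example dataset name.

```python
from tslearn.datasets import UCR_UEA_datasets

# Each UCR/UEA dataset comes with a fixed train/test split from the archive.
X_train, y_train, X_test, y_test = UCR_UEA_datasets().load_dataset("GunPoint")
print(X_train.shape)  # (n_train, series_length, n_channels)
```

Per the quoted protocol, each classifier is then trained three times on the fixed training split with different random seeds and the median test accuracy is reported; the seed values themselves are not stated in the paper.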
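
To make the Table 1 settings concrete, here is a minimal PyTorch sketch of the FCN baseline of Wang, Yan, and Oates (2017) with the quoted optimization hyperparameters (Adam, learning rate 0.0001, 500 epochs, batch size 16). The layer sizes follow the cited FCN design; note that the paper's contribution replaces the final global average pooling with its learnable dynamic temporal pooling, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    # Fully convolutional network (Wang, Yan, and Oates 2017):
    # three conv blocks, then global average pooling over time.
    def __init__(self, in_channels, n_classes):
        super().__init__()
        def block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, k, padding=k // 2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )
        self.features = nn.Sequential(
            block(in_channels, 128, 8),
            block(128, 256, 5),
            block(256, 128, 3),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, channels, length)
        h = self.features(x)
        return self.head(h.mean(dim=-1))   # global average pooling over time

model = FCN(in_channels=1, n_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Table 1: FCN, Adam, lr 0.0001
# Train for 500 epochs with batch size 16 (ResNet uses batch size 64), per Table 1.
```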