Time-Aware Multi-Scale RNNs for Time Series Modeling

Authors: Zipeng Chen, Qianli Ma, Zhenxi Lin

IJCAI 2021 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that the model outperforms state-of-the-art methods on multivariate time series classification and human motion prediction tasks.
Researcher Affiliation | Academia | School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education
Pseudocode | No | The paper presents mathematical equations and architectural diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/qianlima-lab/TAMS-RNNs
Open Datasets | Yes | Following TapNet [Zhang et al., 2020], we conduct experiments on 15 data sets from the latest MTS classification archive [Bagnall et al., 2018]. Human 3.6M (H3.6M) data set [Ionescu et al., 2013]. We choose the FMA-small data set [Defferrard et al., 2016].
Dataset Splits | Yes | We follow the standard 80/10/10% data splitting protocol to get training, validation and testing sets.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer and the dropout operation, but does not specify version numbers for any software libraries or frameworks used in the experiments.
Experiment Setup | Yes | For MTS classification... The number of layers of TAMS-LSTM is set to 2, the hidden state dimension is set to 256 (d = 256), and the hidden state of the final time step is used for classification. Meanwhile, the number of small hidden states is set to 4 (K = 4) with the scale set {1, 2, 4, 8}. We apply the dropout operation [Srivastava et al., 2014] to the input time series X with a dropout rate of 0.1. The gradient-based optimizer Adam [Kingma and Ba, 2014] is chosen, and the learning rate is set to 0.001.
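
The reported setup maps onto a compact set of hyperparameters. Below is a minimal PyTorch sketch of that configuration, assuming a generic multi-scale LSTM in which the k-th small hidden state is refreshed only every scales[k] steps. It is an illustration of the listed hyperparameters (2 layers, d = 256, K = 4, scales {1, 2, 4, 8}, input dropout 0.1, Adam with learning rate 0.001), not the authors' time-aware gating; the official implementation at https://github.com/qianlima-lab/TAMS-RNNs is the reference. The class name, input/output dimensions, and update rule here are placeholders.

```python
import torch
import torch.nn as nn


class MultiScaleLSTMClassifier(nn.Module):
    """Hedged stand-in for the reported TAMS-LSTM configuration (not the authors' model)."""

    def __init__(self, input_dim, num_classes,
                 hidden_dim=256,        # d = 256
                 num_layers=2,          # 2-layer network, as reported
                 scales=(1, 2, 4, 8),   # K = 4 small hidden states
                 dropout=0.1):          # dropout applied to the input series X
        super().__init__()
        assert hidden_dim % len(scales) == 0
        self.scales = scales
        self.block = hidden_dim // len(scales)  # width of each small hidden state
        self.input_dropout = nn.Dropout(dropout)
        self.cells = nn.ModuleList([
            nn.LSTMCell(input_dim if layer == 0 else hidden_dim, hidden_dim)
            for layer in range(num_layers)
        ])
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):  # x: (batch, time, input_dim)
        x = self.input_dropout(x)
        batch, steps, _ = x.shape
        h = [x.new_zeros(batch, cell.hidden_size) for cell in self.cells]
        c = [torch.zeros_like(state) for state in h]
        for t in range(steps):
            inp = x[:, t]
            for layer, cell in enumerate(self.cells):
                new_h, new_c = cell(inp, (h[layer], c[layer]))
                # Generic multi-scale rule (an assumption, not the paper's gating):
                # refresh the k-th hidden-state block only when scales[k] divides t.
                mask = torch.cat([
                    new_h.new_full((self.block,), float(t % s == 0))
                    for s in self.scales
                ])
                h[layer] = mask * new_h + (1 - mask) * h[layer]
                c[layer] = mask * new_c + (1 - mask) * c[layer]
                inp = h[layer]
        # The hidden state of the final time step is used for classification.
        return self.classifier(h[-1])


# Placeholder dimensions for illustration; each data set defines its own.
model = MultiScaleLSTMClassifier(input_dim=9, num_classes=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, learning rate 0.001
logits = model(torch.randn(8, 100, 9))  # toy batch of 8 series of length 100 -> (8, 6) logits
```

The sketch keeps the reported choices visible (final-step hidden state for classification, slower blocks updated less often); any divergence from the actual TAMS-RNN update equations should be resolved against the released code.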