A New Attention Mechanism to Classify Multivariate Time Series
Authors: Yifan Hao, Huiping Cao
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | CA-SFCN is compared with 16 approaches using 14 different MTS datasets. The extensive experimental results show that the CA-SFCN outperforms state-of-the-art classification methods, and the cross attention mechanism achieves better performance than other attention mechanisms. |
| Researcher Affiliation | Academia | Yifan Hao and Huiping Cao, New Mexico State University, {yifan, hcao}@nmsu.edu |
| Pseudocode | No | The paper describes its approach using text and architectural diagrams (Figures 1 and 2) but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The source code can be found at https://github.com/huipingcao/nmsu_yhao_ijcai2020. |
| Open Datasets | Yes | 14 real-world datasets are used to test the performance of the proposed approaches [Dua and Graff, 2017; Karim et al., 2019] |
| Dataset Splits | No | The paper states that 14 real-world datasets are used and that the batch size for training is 128, but it does not specify the exact train/validation/test splits (percentages or counts) for these datasets. |
| Hardware Specification | Yes | All the methods are implemented using Python 3.7, and tested on a server with Intel Xeon Gold 5117 2.0G CPUs, 192GB RAM, and one Nvidia Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions 'Python 3.7' and 'TensorFlow' but does not provide a version number for TensorFlow; only Python is given a specific version. |
| Experiment Setup | Yes | The Adam optimizer is used in the training process. The convolutional and pooling layers use a configuration similar to that in [Karim et al., 2019]. In particular, the convolutional layers contain three 2-D layers with filter sizes 8×1, 5×1, and 3×1; the corresponding filter numbers for the three layers are 128, 256, and 128. |
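
To make the reported setup concrete, below is a minimal sketch of a convolutional backbone matching the configuration quoted in the Experiment Setup row (three convolutional layers with kernel sizes 8, 5, 3 and filter counts 128, 256, 128, trained with Adam and batch size 128). It is not the authors' CA-SFCN implementation: the paper describes 2-D layers with w×1 filters, which this sketch approximates with `Conv1D`; the BatchNorm/ReLU placement, global average pooling, input shape, class count, and epoch count are assumptions based on the FCN design of Karim et al., not details stated in the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_fcn_backbone(n_timesteps, n_variables, n_classes):
    """Hedged sketch of the reported backbone: three conv layers with
    kernel sizes 8, 5, 3 and filter counts 128, 256, 128, trained with Adam."""
    inputs = layers.Input(shape=(n_timesteps, n_variables))

    # Conv1D stands in for the paper's 2-D layers with w x 1 filters.
    x = layers.Conv1D(128, kernel_size=8, padding="same")(inputs)
    x = layers.BatchNormalization()(x)   # BatchNorm/ReLU placement assumed from the FCN of Karim et al.
    x = layers.Activation("relu")(x)

    x = layers.Conv1D(256, kernel_size=5, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    x = layers.Conv1D(128, kernel_size=3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    x = layers.GlobalAveragePooling1D()(x)            # pooling choice assumed, not stated in the row
    outputs = layers.Dense(n_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical usage with the reported batch size of 128; input shape,
# class count, and epoch count are placeholders, not values from the paper.
# model = build_fcn_backbone(n_timesteps=640, n_variables=9, n_classes=6)
# model.fit(X_train, y_train, batch_size=128, epochs=200)
```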