TapNet: Multivariate Time Series Classification with Attentional Prototypical Network
Authors: Xuchao Zhang, Yifeng Gao, Jessica Lin, Chang-Tien Lu
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 18 datasets in a public UEA Multivariate time series archive with eight state-of-the-art baseline methods exhibit the effectiveness of the proposed model. |
| Researcher Affiliation | Academia | Discovery Analytics Center, Virginia Tech, Falls Church, VA; Department of Computer Science, George Mason University, Fairfax, VA |
| Pseudocode | No | The paper describes its proposed methods in prose and with architectural diagrams, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and more experimental results are public to the research community and it can be accessed at https://github.com/xuczhang/tapnet. |
| Open Datasets | Yes | We evaluate the proposed method on 18 datasets from latest multivariate time series classification archive (Bagnall et al. 2018). Datasets are available at http://timeseriesclassification.com (a loading sketch follows the table). |
| Dataset Splits | No | The paper mentions "training/test split" for some datasets in the semi-supervised section (Table 3), but it does not explicitly provide details about a separate validation split or a general splitting methodology for all datasets used in the main experiments. |
| Hardware Specification | Yes | All the experiments are conducted on a single Tesla P100 GPU with 16GB memory. |
| Software Dependencies | No | The paper names the components it builds on (e.g., LSTM, convolutional layers, the Adam optimizer, t-SNE) but does not provide version numbers for any of the underlying software; a hypothetical sketch of these building blocks appears after the table. |
| Experiment Setup | No | The paper states: "The details of the parameter settings can be found in the Appendix." As these details are not in the main text and no specific hyperparameters or training configurations are mentioned there, the answer is no. |
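
Since the report confirms the datasets come from the public UEA archive (row "Open Datasets"), a minimal loading sketch for a reproduction attempt is shown below. It assumes the `sktime` library and its `load_UCR_UEA_dataset` helper, neither of which is mentioned in the paper; the dataset name `BasicMotions` is just one UEA example, not necessarily among the 18 the authors used.

```python
# Hypothetical reproduction aid; the paper does not specify a loading pipeline.
# Assumes sktime (pip install sktime), which mirrors the UEA/UCR archive
# hosted at http://timeseriesclassification.com.
from sktime.datasets import load_UCR_UEA_dataset

# "BasicMotions" is a multivariate UEA dataset used here for illustration.
X_train, y_train = load_UCR_UEA_dataset(
    name="BasicMotions", split="train", return_X_y=True
)
X_test, y_test = load_UCR_UEA_dataset(
    name="BasicMotions", split="test", return_X_y=True
)

# X is a nested pandas DataFrame of shape (n_cases, n_dimensions);
# y is an array of class labels.
print(X_train.shape, len(y_train))
```

This also makes the "Dataset Splits" finding concrete: the archive ships fixed train/test files per dataset, but any validation split would have to be carved out by the reproducer, since the paper does not describe one.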
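The "Software Dependencies" row lists component types without versions. The PyTorch sketch below illustrates the kind of dual-branch encoder those components (an LSTM branch, a convolutional branch, the Adam optimizer) could form. It is emphatically not TapNet itself, which adds an attentional prototype mechanism and whose actual implementation is at https://github.com/xuczhang/tapnet; every layer size here is an invented placeholder.

```python
# Illustrative sketch only; not the authors' TapNet implementation.
# All dimensions below are invented placeholders.
import torch
import torch.nn as nn

class DualBranchEncoder(nn.Module):
    """Toy encoder combining an LSTM branch and a convolutional branch,
    mirroring the component types named in the paper (without its
    attentional prototype mechanism)."""

    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=8, padding=4),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)                              # h: (1, batch, hidden)
        conv_out = self.conv(x.transpose(1, 2)).squeeze(-1)   # (batch, hidden)
        return torch.cat([h.squeeze(0), conv_out], dim=-1)    # (batch, 2*hidden)

model = DualBranchEncoder(n_channels=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the paper
embedding = model(torch.randn(4, 100, 6))  # 4 series, 100 steps, 6 channels
print(embedding.shape)  # torch.Size([4, 128])
```

Because the paper defers hyperparameters to its appendix (row "Experiment Setup"), values such as the hidden size and learning rate above cannot be checked against the main text and should be treated as arbitrary.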