Joint-Label Learning by Dual Augmentation for Time Series Classification
Authors: Qianli Ma, Zhenjing Zheng, Jiawei Zheng, Sen Li, Wanqing Zhuang, Garrison W. Cottrell (pp. 8847–8855)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on extensive time-series datasets show that JobDA can improve the model performance on small datasets. Moreover, we verify that JobDA has better generalization ability compared with conventional data augmentation, and the visualization analysis further demonstrates that JobDA can learn more compact clusters. Experimental Setup. Datasets: We conduct experiments on the UCR time series classification archive (Chen et al. 2015) to compare the proposed method with other methods. ... Implementation Details: Keras 2.2.4 is used to implement all our experiments... |
| Researcher Affiliation | Academia | Qianli Ma1,2, Zhenjing Zheng1, Jiawei Zheng1, Sen Li1, Wanqing Zhuang1, Garrison W. Cottrell3 1School of Computer Science and Engineering, South China University of Technology, Guangzhou 2Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education 3Department of Computer Science and Engineering, University of California, San Diego, CA, USA qianlima@scut.edu.cn, 982360227@qq.com |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper mentions that Keras 2.2.4 is used for implementation and cites external work, but does not explicitly state that the authors' own source code for the proposed method is publicly available or provide a link to it. |
| Open Datasets | Yes | Datasets: We conduct experiments on the UCR time series classification archive (Chen et al. 2015) to compare the proposed method with other methods. The UCR time series classification archive contains 85 publicly available time-series datasets, and each dataset was split into training and testing sets using the standard split. (https://www.cs.ucr.edu/~eamonn/time_series_data/) |
| Dataset Splits | No | The UCR time series classification archive contains 85 publicly available time-series datasets, and each dataset was split into training and testing sets using the standard split (the UCR archive does not provide holdout validation splits). The paper does not specify exact percentages or counts for the training/testing splits, nor does it detail a validation split or cross-validation setup for reproduction. |
| Hardware Specification | Yes | Implementation Details: Keras 2.2.4 is used to implement all our experiments, which run on an Intel Core i7-6850K 3.60GHz CPU, 64GB RAM, and a GeForce GTX 1080 Ti 11GB GPU. |
| Software Dependencies | Yes | Keras 2.2.4 is used to implement all our experiments |
| Experiment Setup | Yes | Implementation Details: Keras 2.2.4 is used to implement all our experiments... We perform four TSW-based transformations (including the original time series) on each time series of the training set for sample augmentation. In addition to the original time series, the numbers of subsequences N used in the other three transformations are 2, 4, and 8, respectively. The loss function is categorical cross-entropy. We choose the model architecture that achieves the lowest training loss and report its performance on the test set... The classification accuracy is used to evaluate the performance of the model, and the macro-F1 score (Yang 1999) is used for class-imbalanced classification. To reduce the impact of random initialization, we run each experiment five times and report the mean and standard deviation. |
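The augmentation scheme quoted above (the original series plus three TSW-based transformations with N = 2, 4, 8 subsequences) could be sketched as follows. This is a hedged illustration only: the paper does not give pseudocode for TSW, so the warp-factor range, the equal-length subsequence split, and the function names `tsw_augment`/`augment_training_set` are all assumptions, not the authors' implementation.

```python
import numpy as np

def tsw_augment(x, n_subseq, rng=None):
    """Illustrative TSW-style transformation: split a 1-D series into
    n_subseq contiguous subsequences, randomly stretch or compress each
    via linear interpolation, then resample back to the original length.
    The exact transformation in the paper may differ."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    warped = []
    for piece in np.array_split(x, n_subseq):
        # Random warp factor per subsequence (assumed range, not from the paper).
        factor = rng.uniform(0.5, 2.0)
        new_len = max(2, int(round(len(piece) * factor)))
        old_grid = np.linspace(0.0, 1.0, num=len(piece))
        new_grid = np.linspace(0.0, 1.0, num=new_len)
        warped.append(np.interp(new_grid, old_grid, piece))
    y = np.concatenate(warped)
    # Resample the warped series back to the original length.
    return np.interp(np.linspace(0.0, 1.0, num=len(x)),
                     np.linspace(0.0, 1.0, num=len(y)), y)

def augment_training_set(X):
    """Per the quoted setup: keep the original series and add TSW
    transformations with N = 2, 4, and 8 subsequences (4 samples total)."""
    out = []
    for x in X:
        out.append(np.asarray(x, dtype=float))
        for n in (2, 4, 8):
            out.append(tsw_augment(x, n))
    return out
```

Each training series thus yields four samples of identical length, which matches the "four TSW-based transformations (including the original time series)" described in the experiment setup.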