Adversarial Dynamic Shapelet Networks
Authors: Qianli Ma, Wanqing Zhuang, Sen Li, Desen Huang, Garrison Cottrell
AAAI 2020, pp. 5069-5076 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on extensive time series data sets show that ADSN is state-of-the-art compared to existing shapelet-based methods. The visualization analysis also shows the effectiveness of dynamic shapelet generation and adversarial training. We conduct experiments on the 85 UCR (Chen et al. 2015) and 8 UEA (Hills et al. 2014) time series datasets. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; (2) Department of Computer Science and Engineering, University of California, San Diego, CA, USA |
| Pseudocode | Yes | The pseudo code of ADSN is shown in section F of the supplementary material. |
| Open Source Code | Yes | The supplementary material mentioned in this paper is available on GitHub: https://github.com/qianlima-lab/ADSN. |
| Open Datasets | Yes | We conduct experiments on the 85 UCR (Chen et al. 2015) and 8 UEA (Hills et al. 2014) time series datasets. The statistics of these 26 datasets are shown in section A of the supplementary material. |
| Dataset Splits | Yes | Each data set was split into training and testing set using the standard split. The hyper-parameters of ADSN are tuned through a grid search approach based on cross validation. |
| Hardware Specification | Yes | The experiments are run on the TensorFlow platform using an Intel Core i7-6850K 3.60-GHz CPU, 64-GB RAM and a GeForce GTX 1080-Ti 11G GPU. |
| Software Dependencies | No | The paper mentions the "TensorFlow platform" and "The Adam (Kingma and Ba 2014) optimizer" but does not specify their version numbers. |
| Experiment Setup | Yes | λdiv and λadv are set to 0.01 and 0.05, respectively. The hyper-parameters of ADSN are tuned through a grid search approach based on cross validation. The number of shapelets is chosen from k ∈ {30, 60, 90, 120}. The dropout rate applied to the softmax layer is evaluated over {0, 0.25, 0.5}. The shapelet lengths are chosen according to the length of the time series. ... The Adam (Kingma and Ba 2014) optimizer is employed with an initial learning rate of 0.001. |
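
The setup row above lists the full hyper-parameter grid reported in the paper. The Python sketch below illustrates, under stated assumptions, how such a grid search over shapelet count and dropout rate could be wired up with the reported Adam settings and loss weights. It is not the authors' released code: `build_adsn` and `cross_validate` are hypothetical placeholders for the ADSN model constructor and the cross-validation routine (the official implementation is in the GitHub repository linked above).

```python
# Minimal sketch of the reported hyper-parameter grid search; NOT the authors' code.
from itertools import product

import tensorflow as tf

# Grid reported in the paper: number of shapelets k and softmax-layer dropout rate.
NUM_SHAPELETS = [30, 60, 90, 120]
DROPOUT_RATES = [0.0, 0.25, 0.5]


def select_hyperparameters(train_x, train_y, shapelet_lengths):
    """Pick (k, dropout) by cross-validation accuracy on the training set."""
    best_score, best_config = -1.0, None
    for k, dropout in product(NUM_SHAPELETS, DROPOUT_RATES):
        model = build_adsn(                     # hypothetical ADSN constructor
            num_shapelets=k,
            shapelet_lengths=shapelet_lengths,  # chosen from the series length
            dropout=dropout,
            optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
            lambda_div=0.01,                    # diversity loss weight from the paper
            lambda_adv=0.05,                    # adversarial loss weight from the paper
        )
        score = cross_validate(model, train_x, train_y)  # hypothetical CV helper
        if score > best_score:
            best_score, best_config = score, (k, dropout)
    return best_config
```

The grid itself (k, dropout rate, λdiv, λadv, learning rate) comes directly from the quoted experiment setup; everything else in the sketch is scaffolding for illustration.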