ShapeNet: A Shapelet-Neural Network Approach for Multivariate Time Series Classification
Authors: Guozhong Li, Byron Choi, Jianliang Xu, Sourav S Bhowmick, Kwok-Pan Chun, Grace Lai-Hung Wong
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted experiments on ShapeNet with competitive state-of-the-art and benchmark methods using UEA MTS datasets. The results show that the accuracy of ShapeNet is the best of all the methods compared. |
| Researcher Affiliation | Academia | (1) Department of Computer Science, Hong Kong Baptist University, Hong Kong; (2) School of Computing Engineering, Nanyang Technological University, Singapore; (3) Department of Geography, Hong Kong Baptist University, Hong Kong; (4) Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong |
| Pseudocode | No | The paper describes procedures and architectures using text and diagrams (e.g., Figure 1, Figure 2), but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | To promote reproducibility, our source code is made public at http://alturl.com/d26bo. |
| Open Datasets | Yes | A well-known benchmark of MTS datasets, the UEA archive, was tested. Detailed information regarding the datasets can be obtained from (Bagnall et al. 2018). |
| Dataset Splits | No | The paper mentions 'The overall classification accuracy results for the datasets are presented in Table 1. The accuracy results of ShapeNet are the mean values of 10 runs', but does not explicitly specify train/validation/test dataset splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | Yes | All the experiments were conducted on a machine with two Xeon E5-2630v3 @ 2.4GHz (2S/8C) / 128GB RAM / 64 GB SWAP and two NVIDIA Tesla K80, running on CentOS 7.3 (64-bit). |
| Software Dependencies | No | The paper states 'We have implemented the proposed method in Python' but does not provide specific version numbers for Python or any other key software libraries or frameworks used. |
| Experiment Setup | Yes | The batch size, the number of channels, the kernel size of the convolutional network, and the network depth are set to 10, 40, 3, and 10, respectively. The learning rate is kept fixed at the low value of η = 0.001, while the number of epochs for network training is 400. µ in Eq. 1 is set to 0.2, and λ = 1 for the triplet loss function. The β in Eq. 8 is 0.5. (A configuration sketch collecting these values follows the table.) |
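
For convenience, the hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration object. The following is a minimal sketch in Python (the paper states the method is implemented in Python); all key names, such as `triplet_lambda` and `beta`, are hypothetical labels for the reported values and are not taken from the authors' released code.

```python
# Hedged sketch: the training settings reported in the paper collected into
# one configuration dict. Key names are illustrative, not from the authors'
# released implementation.
shapenet_config = {
    "batch_size": 10,        # batch size
    "num_channels": 40,      # number of channels in the convolutional network
    "kernel_size": 3,        # convolution kernel size
    "network_depth": 10,     # network depth
    "learning_rate": 1e-3,   # fixed learning rate (η = 0.001)
    "epochs": 400,           # epochs for network training
    "mu": 0.2,               # µ in Eq. 1
    "triplet_lambda": 1.0,   # λ weighting in the triplet loss function
    "beta": 0.5,             # β in Eq. 8
}

if __name__ == "__main__":
    # Print the settings so they can be checked against the paper's text.
    for name, value in shapenet_config.items():
        print(f"{name}: {value}")
```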