Energy Efficient Streaming Time Series Classification with Attentive Power Iteration

Authors: Hao Huang, Tapan Shah, Scott Evans, Shinjae Yoo

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our approach excels in classification accuracy and energy efficiency, with over 70% less consumption and threefold faster task completion than benchmarks. Our evaluation encompasses five public benchmark datasets (Table 1). The Epilepsy dataset (Bagnall et al. 2018) simulates brain disorder recognition tasks among healthy participants. Handwriting (Shokoohi et al. 2017) captures smartwatch motion during alphabet writing. HAR6 (Karim et al. 2019) employs smartphone sensors for human activity recognition. DSA-19 (Altun et al. 2010) involves data on 19 daily and sports activities. Google-13 (Warden 2018) is an audio dataset for keyword spotting systems.
Researcher Affiliation | Collaboration | Hao Huang^1, Tapan Shah^1, Scott Evans^1, Shinjae Yoo^2; ^1 GE Vernova Research, Niskayuna, NY, USA; ^2 Brookhaven National Lab, Upton, NY, USA
Pseudocode | Yes | Algorithm 1: hist-PCA (Yang et al. 2018) and Algorithm 2: our Attentive Power Iteration (API)
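As a point of reference for Algorithm 2 above, the sketch below shows plain power (subspace) iteration over a covariance matrix. It is not the paper's attentive variant, which additionally weights the update with a learned attention module; all names, shapes, and defaults here are illustrative assumptions.

```python
import torch

def power_iteration(cov: torch.Tensor, k: int = 2, num_iters: int = 1) -> torch.Tensor:
    """Plain power (subspace) iteration for the top-k subspace of a covariance matrix.

    Generic sketch only: the paper's Algorithm 2 (Attentive Power Iteration)
    adds a learned attention step that is not reproduced here.
    """
    m = cov.shape[0]
    sketch = torch.randn(m, k)               # random initial sketch in R^{m x k}
    for _ in range(num_iters):                # kappa = 1 iteration in the paper's setup
        sketch = cov @ sketch                 # power step
        sketch, _ = torch.linalg.qr(sketch)   # re-orthonormalise the columns
    return sketch
```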
Open Source Code | No | The paper mentions CodeCarbon and provides a link (https://github.com/mlco2/codecarbon), but this is a third-party tool used for energy quantification, not open-source code for the paper's methodology itself. No explicit statement of a code release by the authors is found.
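Since the paper cites CodeCarbon rather than releasing its own code, the following hedged sketch shows how that tool is typically wrapped around a workload to obtain energy/emissions estimates. The function `run_streaming_classification` and the project name are placeholders, not the authors' measurement script.

```python
from codecarbon import EmissionsTracker  # third-party tool cited in the paper

def run_streaming_classification():
    pass  # placeholder for the training/inference workload being measured

tracker = EmissionsTracker(project_name="strAPI-energy")  # project name is illustrative
tracker.start()
try:
    run_streaming_classification()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-eq for the tracked span
print(f"Estimated emissions: {emissions_kg} kg CO2-eq")
```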
Open Datasets | Yes | Our evaluation encompasses five public benchmark datasets (Table 1). The Epilepsy dataset (Bagnall et al. 2018)... Handwriting (Shokoohi et al. 2017)... HAR6 (Karim et al. 2019)... DSA-19 (Altun et al. 2010)... Google-13 (Warden 2018)...
Dataset Splits | Yes | For each dataset, we perform 20 runs, randomly partitioning the dataset into training, validation, and test sets following the proportions outlined in Table 1.
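The quoted protocol implies one fresh random split per run. A minimal sketch of such a protocol is given below; the 60/20/20 proportions and the synthetic data are placeholders standing in for the per-dataset proportions of Table 1.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def random_split(X, y, train_frac=0.6, val_frac=0.2, seed=0):
    """One random train/val/test partition; proportions are illustrative."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=train_frac, random_state=seed, stratify=y)
    val_share = val_frac / (1.0 - train_frac)   # fraction of the remaining data
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, train_size=val_share, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# Dummy data standing in for one of the benchmark datasets.
X = np.random.randn(1000, 30)
y = np.random.randint(0, 6, size=1000)

# One independent split per run, as in the paper's 20-run protocol.
splits = [random_split(X, y, seed=run) for run in range(20)]
```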
Hardware Specification | Yes | Experiments were conducted on a local machine equipped with 32GB of memory and an Intel Core i9 processor running at 2.9 GHz, without GPU support.
Software Dependencies | No | The paper mentions the PyTorch framework, the Adam optimizer, and CodeCarbon, but does not provide version numbers for any of these software dependencies.
Experiment Setup | Yes | All experiments were implemented using the PyTorch framework. We employed a training batch size of 100 and utilized the Adam optimizer with a learning rate of 1 × 10^-3 for training our model. The parameter settings for our strAPI framework are as follows: 1) The TCN consists of two layers; both layers have a kernel size of 3 and an output dimension of 30. 2) In Algorithm 2, the critical hyperparameters are the sketch dimension p and the attention space dimension q. The sketch has shape R^(m×k), where m is the sketch dimension and k is the number of sketches. To ensure ample representation capacity while maintaining a compact model size, we set k = 2. 3) Each batch contains 10 samples (embeddings) by default. 4) We employ κ = 1 iteration (the while loop in Algorithm 2) for its effectiveness and high efficiency. We constructed small, medium, and large strAPI models by varying the sketch size p and attention dimensions q: the small model has p = 20 and q = 20, the medium model has p = 25 and q = 25, and the large model has p = 30 and q = 40.
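The reported hyperparameters can be collected into a small configuration sketch. The two-layer convolutional stack below is only a stand-in for the paper's TCN encoder (dilation and residual details are not reported here), and the input channel count is an assumption.

```python
import torch
import torch.nn as nn

# Stand-in for the paper's two-layer TCN encoder (kernel size 3, output dim 30);
# in_channels=1 is an assumption about the input dimensionality.
tcn = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=30, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(in_channels=30, out_channels=30, kernel_size=3, padding=1),
    nn.ReLU(),
)

optimizer = torch.optim.Adam(tcn.parameters(), lr=1e-3)  # reported learning rate
BATCH_SIZE = 100                                          # reported training batch size

# Reported strAPI size variants: sketch dimension p and attention dimension q.
STRAPI_VARIANTS = {
    "small":  {"p": 20, "q": 20},
    "medium": {"p": 25, "q": 25},
    "large":  {"p": 30, "q": 40},
}
K_SKETCHES = 2  # number of sketches k
KAPPA = 1       # power-iteration steps per update
```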