Time Series Anomaly Detection with Multiresolution Ensemble Decoding
Authors: Lifeng Shen, Zhongzhong Yu, Qianli Ma, James T. Kwok
AAAI 2021, pp. 9567–9575
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical studies on real-world benchmark data sets demonstrate that the proposed RAMED model outperforms recent strong baselines on time series anomaly detection. |
| Researcher Affiliation | Academia | 1 Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong; 2 School of Computer Science and Engineering, South China University of Technology, Guangzhou; 3 Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education |
| Pseudocode | Yes | Algorithm 1 Recurrent Autoencoder with Multiresolution Ensemble Decoding (RAMED) (see the architecture sketch below the table). |
| Open Source Code | No | The paper provides links to baseline implementations (RAE, RRN, BeatGAN, RAE-ensemble) but does not provide a link or explicit statement about the availability of the source code for the proposed RAMED model. |
| Open Datasets | Yes | ECG, 2D-gesture and Power-demand are from http://www.cs.ucr.edu/~eamonn/discords/, while Yahoo's S5 is from https://webscope.sandbox.yahoo.com/. |
| Dataset Splits | Yes | We use 30% of the training set for validation, and the rest for actual training. The model with the lowest reconstruction loss on the validation set is selected for evaluation. For Yahoo's S5, the available data set is split into three parts: 40% of the samples for training, another 30% for validation, and the remaining 30% for testing (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud computing resources). |
| Software Dependencies | No | The paper mentions the 'Adam optimizer (Kingma and Ba 2015)' but does not name the software libraries used or their version numbers, so the software environment for the experiments cannot be reconstructed. |
| Experiment Setup | Yes | We use 3 encoders and 3 decoders. Each encoder and decoder is a single-layer LSTM with 64 units. We perform grid search on the hyperparameter β in (7) over {0.1, 0.2, ..., 0.9} and λ in (12) over {10^-4, 10^-3, 10^-2, 10^-1, 1}; τ in (6) is set to 3 and γ in (10) is set to 0.1. The Adam optimizer (Kingma and Ba 2015) is used with an initial learning rate of 10^-3 (a configuration sketch follows the table). |
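
The Pseudocode row refers to Algorithm 1 (RAMED), whose code is not released. Below is a minimal PyTorch sketch of the idea as read from the setup details: one single-layer 64-unit LSTM encoder feeding three LSTM decoders that reconstruct the input at progressively finer temporal resolutions. This is not the authors' implementation; the class name `MultiresolutionRAE`, the zero-input decoding scheme, the `resolutions` factors, and the per-resolution loss are all assumptions, and the paper's coarse-to-fine fusion and extra loss terms (equations (7), (10) and (12)) are omitted.

```python
# Hypothetical simplification of a multiresolution recurrent autoencoder,
# not the authors' RAMED code. Only the 64-unit single-layer LSTMs and the
# 3-decoder count come from the paper's stated setup.
import torch
import torch.nn as nn


class MultiresolutionRAE(nn.Module):
    def __init__(self, n_features, hidden=64, resolutions=(4, 2, 1)):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        # One single-layer LSTM decoder per resolution (coarse to fine).
        self.decoders = nn.ModuleList(
            nn.LSTM(n_features, hidden, batch_first=True) for _ in resolutions
        )
        self.out = nn.ModuleList(nn.Linear(hidden, n_features) for _ in resolutions)
        self.resolutions = resolutions

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        _, (h, c) = self.encoder(x)
        recons = []
        for dec, proj, r in zip(self.decoders, self.out, self.resolutions):
            steps = x.size(1) // r
            # Unroll each decoder from the encoder state on zero inputs
            # (an assumed decoding scheme, chosen for brevity).
            dec_in = torch.zeros(x.size(0), steps, x.size(2), device=x.device)
            out, _ = dec(dec_in, (h, c))
            recons.append(proj(out))  # reconstruction at this resolution
        return recons


def multires_loss(recons, x, resolutions):
    """Sum of MSE losses, each decoder matched to a downsampled target."""
    loss = 0.0
    for rec, r in zip(recons, resolutions):
        target = x[:, ::r, :][:, : rec.size(1), :]  # coarse view of the input
        loss = loss + nn.functional.mse_loss(rec, target)
    return loss
```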
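
The Dataset Splits row describes a 40/30/30 train/validation/test split for Yahoo's S5 and a 70/30 train/validation split elsewhere. A short sketch of that protocol follows, assuming chronological (non-shuffled) splits, since the paper does not state how the split is drawn:

```python
# Split protocol from the "Dataset Splits" row. The percentages are quoted
# from the paper; chronological splitting is an assumption.
import numpy as np


def split_yahoo_s5(series: np.ndarray):
    """40% train, 30% validation, 30% test for Yahoo's S5."""
    n = len(series)
    train_end = int(0.4 * n)
    val_end = train_end + int(0.3 * n)
    return series[:train_end], series[train_end:val_end], series[val_end:]


def split_train_val(train: np.ndarray, val_frac: float = 0.3):
    """Hold out 30% of the training set for validation (other data sets)."""
    cut = int((1.0 - val_frac) * len(train))
    return train[:cut], train[cut:]
```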
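
Finally, the Experiment Setup row lists the search grids and optimizer settings. The sketch below wires them into a simple grid-search loop, reusing `MultiresolutionRAE` and `multires_loss` from the architecture sketch above. The grids, τ = 3, γ = 0.1, the validation-loss selection rule, and the initial Adam learning rate of 10^-3 are quoted from the paper; the epoch budget and the `fit`/`grid_search` helpers are hypothetical, and β and λ are only carried along here because the loss terms they weight (equations (7) and (12)) are not reproduced.

```python
# Grid search over the hyperparameter grids quoted in the paper's setup.
import itertools
import torch

betas = [round(0.1 * i, 1) for i in range(1, 10)]   # beta in eq. (7)
lambdas = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]             # lambda in eq. (12)
TAU, GAMMA = 3, 0.1                                  # tau in eq. (6), gamma in eq. (10)


def fit(model, train_x, epochs=10):
    """Hypothetical training loop: Adam with the paper's initial lr of 1e-3."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):  # epoch budget is an assumption, not from the paper
        opt.zero_grad()
        loss = multires_loss(model(train_x), train_x, model.resolutions)
        loss.backward()
        opt.step()


def grid_search(train_x, val_x):
    """Select the config with the lowest validation reconstruction loss,
    mirroring the model-selection rule stated in the Dataset Splits row."""
    best = (float("inf"), None)
    for beta, lam in itertools.product(betas, lambdas):
        model = MultiresolutionRAE(n_features=train_x.size(2))
        fit(model, train_x)  # beta/lam would enter the paper's full loss here
        with torch.no_grad():
            val = multires_loss(model(val_x), val_x, model.resolutions).item()
        if val < best[0]:
            best = (val, {"beta": beta, "lam": lam, "tau": TAU, "gamma": GAMMA})
    return best
```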