BeatGAN: Anomalous Rhythm Detection using Adversarially Generated Time Series
Authors: Bin Zhou, Shenghua Liu, Bryan Hooi, Xueqi Cheng, Jing Ye
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that BeatGAN accurately and efficiently detects anomalous beats in ECG time series, and routes doctors' attention to anomalous time ticks, achieving accuracy of nearly 0.95 AUC and very fast inference (2.6 ms per beat). |
| Researcher Affiliation | Academia | Bin Zhou (1), Shenghua Liu (1), Bryan Hooi (2), Xueqi Cheng (1) and Jing Ye (3); (1) Institute of Computing Technology, Chinese Academy of Sciences; (2) School of Computer Science, National University of Singapore; (3) Department of Anesthesiology, Nanfang Hospital, Southern Medical University |
| Pseudocode | Yes | Algorithm 1 Training algorithm |
| Open Source Code | Yes | Reproducibility: BeatGAN is open-sourced at https://github.com/Vniex/BeatGAN |
| Open Datasets | Yes | We evaluate our proposed model on ECG time series from the MIT-BIH Arrhythmia Database [Moody and Mark, 2001], available at https://physionet.org/cgi-bin/atm/ATM?database=mitdb (see the data-loading sketch after the table). |
| Dataset Splits | Yes | Moreover, we use 5-fold cross-validation for each method, and report the averaged metrics and standard deviations (std) (see the cross-validation sketch after the table). |
| Hardware Specification | Yes | We ran the inferences of BeatGAN, AnoGAN and GANomaly, which use neural networks, on a server with a Tesla K80 GPU, on ECG data, all implemented in PyTorch. |
| Software Dependencies | No | The paper mentions 'all implemented in PyTorch' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | We choose 320 time ticks as the window size for a beat: 140 time ticks before the given R-peak and 180 ticks after it. We set the dimension size of the latent space to 50, λ = 1.0 for objective (6), and k = 16 for data augmentation. We also set up an experiment that adds anomalous data to the training data to evaluate robustness. The structure of G_D follows the architecture of the generator from DCGAN [Radford et al., 2015]. We use 5 1D transposed convolutional layers, each followed by batch norm and leaky ReLU activation with the slope of the leak set to 0.2. The transposed convolutional kernel size and filter count of each layer are 512(10/1)-256(4/2)-128(4/2)-64(4/2)-32(4/2): e.g. 512(10/1) means that the number of filters is 512, the size of the filter is 10 and the stride is 1. G_E's structure is a mirrored version of G_D, and D has the same architectural details as G_E. We use the Adam optimizer with an initial learning rate lr = 0.0001 and momentums β1 = 0.5, β2 = 0.999 (see the decoder and training-step sketches after the table). |
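
Under the setup quoted above, each training example is a 320-tick window around an annotated R-peak: 140 ticks before and 180 after. The sketch below, assuming the `wfdb` Python package (the paper does not say how the records were read) and skipping the filtering of non-beat annotation symbols, shows how such windows can be cut from an MIT-BIH record.

```python
import numpy as np
import wfdb  # assumption: the wfdb package (pip install wfdb); not named in the paper

# Fetch one MIT-BIH record and its beat annotations directly from PhysioNet.
record = wfdb.rdrecord("100", pn_dir="mitdb")   # "100" is any MIT-BIH record id
ann = wfdb.rdann("100", "atr", pn_dir="mitdb")  # annotation samples mark R-peaks

signal = record.p_signal[:, 0]  # first ECG channel
beats = []
for r in ann.sample:  # R-peak sample indices (non-beat symbols not filtered here)
    if r - 140 >= 0 and r + 180 <= len(signal):
        beats.append(signal[r - 140 : r + 180])  # 320-tick window per beat
beats = np.stack(beats)
print(beats.shape)  # (num_beats, 320)
```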
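
The evaluation protocol quoted above is 5-fold cross-validation with averaged metrics and standard deviations. A minimal sketch of that loop, where `evaluate_fold` is a hypothetical placeholder for training a detector on one split and returning its AUC on the held-out fold:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(beats, labels, evaluate_fold, n_splits=5, seed=0):
    """Run n-fold CV and return the averaged metric and its std,
    matching the reporting style quoted above."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(beats):
        scores.append(evaluate_fold(beats[train_idx], labels[train_idx],
                                    beats[test_idx], labels[test_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```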
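
The decoder spec 512(10/1)-256(4/2)-128(4/2)-64(4/2)-32(4/2) expands the 50-dimensional latent code back into a 320-tick beat. Below is a sketch of G_D in PyTorch under two assumptions the quoted text leaves open: DCGAN-style padding (0 on the first layer, 1 thereafter) and a final 1-channel ConvTranspose1d with Tanh to reach the 320-tick window.

```python
import torch
import torch.nn as nn

def make_decoder(nz: int = 50) -> nn.Sequential:
    """Sketch of G_D: five ConvTranspose1d blocks per the quoted spec, plus
    an assumed 1-channel output layer so an (nz x 1) code maps to 1 x 320."""
    def block(c_in, c_out, kernel, stride, padding):
        return [
            nn.ConvTranspose1d(c_in, c_out, kernel, stride, padding, bias=False),
            nn.BatchNorm1d(c_out),
            nn.LeakyReLU(0.2, inplace=True),  # slope of the leak = 0.2
        ]

    return nn.Sequential(
        *block(nz, 512, 10, 1, 0),   # 512(10/1): nz x 1  -> 512 x 10
        *block(512, 256, 4, 2, 1),   # 256(4/2): 512 x 10 -> 256 x 20
        *block(256, 128, 4, 2, 1),   # 128(4/2): 256 x 20 -> 128 x 40
        *block(128, 64, 4, 2, 1),    # 64(4/2):  128 x 40 -> 64 x 80
        *block(64, 32, 4, 2, 1),     # 32(4/2):  64 x 80  -> 32 x 160
        nn.ConvTranspose1d(32, 1, 4, 2, 1, bias=False),  # assumed: 32 x 160 -> 1 x 320
        nn.Tanh(),
    )

decoder = make_decoder()
z = torch.randn(8, 50, 1)  # a batch of 8 latent codes
print(decoder(z).shape)    # torch.Size([8, 1, 320])
```

Each of the five listed blocks doubles the temporal length while halving the channel count, which is why an extra output layer is needed to go from 160 to the full 320-tick beat.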
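
Finally, for objective (6) with λ = 1.0, here is a hedged sketch of one generator update, assuming the BeatGAN form of the loss: reconstruction error plus a feature-matching term on the discriminator's hidden activations. `netG` (the G_E/G_D encoder-decoder) and `netD_features` are illustrative names, not identifiers from the paper or its repository.

```python
import torch
import torch.nn.functional as F

def generator_step(netG, netD_features, optimizer_g, x, lam=1.0):
    """One generator update: ||x - G(x)||^2 plus lam times the discrepancy
    between D's hidden features on x and on the reconstruction G(x)."""
    optimizer_g.zero_grad()
    x_hat = netG(x)                              # reconstruct the input beat
    loss_rec = F.mse_loss(x_hat, x)              # reconstruction error
    loss_adv = F.mse_loss(netD_features(x_hat),  # feature matching on the
                          netD_features(x))      # discriminator's hidden layer
    loss = loss_rec + lam * loss_adv             # objective (6), lambda = 1.0
    loss.backward()
    optimizer_g.step()
    return loss.item()

# Optimizer settings quoted above:
# optimizer_g = torch.optim.Adam(netG.parameters(), lr=1e-4, betas=(0.5, 0.999))
```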