ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System

Authors: Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang

AAAI 2020, pp. 3446-3453 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental Results: In this section, we first introduce the victim DNN-based ECG classification system for attack scheme evaluation, then evaluate our attacks in two threat models respectively." (Footnote 4: https://github.com/codespace123/ECGadv)
Researcher Affiliation | Academia | Huangxun Chen (1), Chenyu Huang (1), Qianyi Huang (2,1), Qian Zhang (1), Wei Wang (3). Affiliations: (1) The Hong Kong University of Science and Technology; (2) Southern University of Science and Technology, Peng Cheng Laboratory; (3) Huazhong University of Science and Technology. Emails: {hchenay, chuangak}@connect.ust.hk, huangqy@sustech.edu.cn, qianzh@cse.ust.hk, weiwangw@hust.edu.cn. Co-primary authors.
Pseudocode | No | The paper describes the attack strategies and objective functions but does not present pseudocode or formally labeled algorithm blocks.
Open Source Code | Yes | "Experimental Results: In this section, we first introduce the victim DNN-based ECG classification system for attack scheme evaluation, then evaluate our attacks in two threat models respectively." (Footnote 4: https://github.com/codespace123/ECGadv)
Open Datasets | Yes | "In the PhysioNet/Computing in Cardiology Challenge 2017 (Clifford et al. 2017), (Andreotti et al. 2017) reproduced the approach by (Rajpurkar et al. 2017) on the Phy DB dataset and achieved a good performance. The model is representative of the current state of the art in arrhythmia classification. Both their algorithm and model are available in open source (footnote 5). The Phy DB dataset consists of 8,528 short single-lead ECG segments labeled as 4 classes: normal rhythm (N), atrial fibrillation (A), other rhythm (O) and noise (~)." (A hedged data-loading sketch follows the table.)
Dataset Splits | No | The paper describes the 'attack dataset' and how segments were selected for attack evaluation, but it does not specify explicit training/validation/test splits (percentages or counts) for either the victim model or the adversarial example generation process.
Hardware Specification | Yes | "To further quantify the efficiency, we run the adversarial attacks with different metrics: Soft-DTW, smoothness metric and L2 norm. Both the computing resources (AWS c5.2xlarge instances) and the victim ECGs are the same." (An illustrative metric sketch follows the table.)
Software Dependencies | No | The paper mentions software components such as the Adam optimizer and the CleverHans framework, but it does not specify version numbers or other key software dependencies with pinned versions (e.g., Python, PyTorch/TensorFlow).
Experiment Setup | Yes | "We implement our attack strategy for Type I Attack under the framework of CleverHans (Papernot et al. 2018). We adopt the Adam optimizer (Kingma and Ba 2014) with a 0.005 learning rate to search for adversarial examples." (An illustrative optimization sketch follows the table.)
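
For the Open Datasets row: the PhysioNet/CinC 2017 training set is distributed as one .mat file per record plus a REFERENCE.csv label file. Below is a minimal loading sketch, assuming the public challenge layout (signal stored under the 'val' key, labels N, A, O, ~); the directory name and helper function are illustrative and not taken from the paper's code.

```python
import csv
import os
import numpy as np
from scipy.io import loadmat

def load_cinc2017(data_dir):
    """Load PhysioNet/CinC 2017 single-lead ECG records and labels.

    Assumes the public challenge layout: one <record>.mat file per segment
    (signal stored under the 'val' key) plus a REFERENCE.csv mapping record
    names to the four labels N, A, O, ~.
    """
    records, labels = [], []
    with open(os.path.join(data_dir, "REFERENCE.csv")) as f:
        for name, label in csv.reader(f):
            mat = loadmat(os.path.join(data_dir, name + ".mat"))
            records.append(mat["val"].squeeze().astype(np.float32))
            labels.append(label)
    return records, labels

# Example usage (path is hypothetical):
# ecgs, labels = load_cinc2017("training2017")
# print(len(ecgs), sorted(set(labels)))  # expected: 8528 ['A', 'N', 'O', '~']
```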
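For the Hardware Specification row, which compares attack runs under Soft-DTW, a smoothness metric, and the L2 norm: the sketch below illustrates an L2 distance and one plausible smoothness penalty on the perturbation. The exact formulations in the paper (including its Soft-DTW variant) may differ, so treat these as assumptions rather than the paper's definitions.

```python
import numpy as np

def l2_distance(x_adv, x):
    """Plain L2 norm of the perturbation delta = x_adv - x."""
    return np.linalg.norm(x_adv - x)

def smoothness_penalty(x_adv, x):
    """Penalize abrupt sample-to-sample changes in the perturbation.

    A sum of squared first differences is one common way to encode
    'smoothness'; the paper's metric may be defined differently.
    """
    delta = x_adv - x
    return np.sum(np.diff(delta) ** 2)
```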
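For the Experiment Setup row: the paper searches for adversarial examples with the Adam optimizer at a 0.005 learning rate under CleverHans (TensorFlow). As a framework-agnostic illustration only, the sketch below uses PyTorch with a generic targeted cross-entropy loss plus an L2 penalty; the loss form, the weight lam, and the step count are assumptions, not the paper's exact objective.

```python
import torch

def adam_attack(model, x, target_class, lam=0.01, steps=1000, lr=0.005):
    """Optimize an additive perturbation so the classifier outputs target_class.

    Sketch only: minimizes cross-entropy toward the target plus an L2 penalty
    on the perturbation, using Adam at the paper-reported 0.005 learning rate.
    The actual ECGadv objectives (e.g., smoothness or Soft-DTW regularizers)
    would replace the L2 term.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x + delta)
        loss = torch.nn.functional.cross_entropy(logits, target) + lam * delta.norm()
        loss.backward()
        optimizer.step()
    return (x + delta).detach()
```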