Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram

Authors: Yeongyeon Na, Minje Park, Yunwon Tae, Sunghoon Joo

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | ST-MEM outperforms other SSL baseline methods in various experimental settings for arrhythmia classification tasks. Moreover, we demonstrate that ST-MEM is adaptable to various lead combinations. Through quantitative and qualitative analysis, we show a spatio-temporal relationship within ECG data. In this section, we examine the results of our experiments, evaluating them both quantitatively and qualitatively to verify the effectiveness of ST-MEM. Additional experimental results are reported in Appendix B.
Researcher Affiliation | Industry | Yeongyeon Na, Minje Park, Yunwon Tae, and Sunghoon Joo, VUNO Inc. {yeongyeon.na, minje.park, yunwon.tae, sunghoon.joo}@vuno.co
Pseudocode | No | The paper describes the method conceptually and visually (Figure 3) but does not provide any structured pseudocode or algorithm blocks. (A hedged sketch of the pre-training step follows the table.)
Open Source Code | Yes | Our code is available at https://github.com/bakqui/ST-MEM.
Open Datasets | Yes | PTB-XL, Chapman, Ningbo, and CPSC2018: https://physionet.org/content/challenge-2021/1.0.3/ ; CODE-15: https://zenodo.org/record/4916206#.YUG9MStxeUl ; PhysioNet2017: https://physionet.org/content/challenge-2017/1.0.0/ (a loading example follows the table.)
Dataset Splits | Yes | The downstream datasets are divided into training, validation, and test sets following a 70-10-20 configuration. Table 10 provides the preprocessing steps for PTB-XL, along with information about the utilized train, validation, and test sets. Likewise, Table 11 presents information regarding CPSC2018, while Table 12 outlines details concerning PhysioNet2017. (A sketch of this split follows the table.)
Hardware Specification | Yes | For environment details, all experiments were run on Ubuntu 20.04.6 with an AMD EPYC 7502 32-Core Processor and an NVIDIA GeForce RTX 3080 Ti.
Software Dependencies | Yes | The versions of the libraries used in all experiments are Python 3.9.13 and PyTorch 1.11.0.
Experiment Setup | Yes | Further details of the hyperparameters used in each pre-training are shown in Table 6. (An optimizer/scheduler sketch follows the table.)

Table 6: Hyperparameter settings.
| Hyperparameter | Pre-training | Fine-tuning | Linear evaluation |
| Backbone | ViT-B | ViT-B | ViT-B |
| Learning rate | 0.0012 | 0.001 | 0.001 |
| Batch size | 2048 | 1024 | 32 |
| Epochs | 800 | 100 | 100 |
| Optimizer | AdamW | AdamW | AdamW |
| Learning rate scheduler | Cosine annealing | Cosine annealing | Cosine annealing |
| Warmup steps | 40 | 5 | 5 |
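
Since the paper offers no pseudocode (see the Pseudocode row), below is a minimal sketch of one masked-reconstruction pre-training step in PyTorch, following the conceptual description around Figure 3. The function name, tensor shapes, patch length, mask ratio, and the encoder/decoder interfaces are assumptions for illustration, not the authors' implementation.

```python
import torch

def pretrain_step(encoder, decoder, ecg, patch_len=75, mask_ratio=0.75):
    """One masked-reconstruction step on a batch of multi-lead ECGs.

    ecg: (batch, leads, time) raw signal. patch_len and mask_ratio are
    illustrative values, not necessarily the paper's settings.
    """
    B, L, _ = ecg.shape
    # Split each lead into non-overlapping temporal patches,
    # truncating any remainder that does not fill a patch.
    T = (ecg.shape[-1] // patch_len) * patch_len
    patches = ecg[..., :T].reshape(B, L, T // patch_len, patch_len)
    N, P = patches.shape[2], patches.shape[3]

    # Randomly keep a fraction of patches in every lead (random masking).
    n_keep = int(N * (1 - mask_ratio))
    perm = torch.rand(B, L, N, device=ecg.device).argsort(dim=-1)
    keep_idx = perm[..., :n_keep]
    visible = torch.gather(
        patches, 2, keep_idx.unsqueeze(-1).expand(-1, -1, -1, P))

    # Encode only the visible patches; decode a reconstruction of all patches.
    latent = encoder(visible)   # assumed ViT-style encoder over patches
    recon = decoder(latent)     # assumed to return shape (B, L, N, P)

    # MSE loss on the masked positions only, as in masked autoencoding.
    masked = torch.ones(B, L, N, dtype=torch.bool, device=ecg.device)
    masked.scatter_(2, keep_idx, False)
    loss = ((recon - patches)[masked] ** 2).mean()
    return loss
```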
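The datasets in the Open Datasets row are distributed in WFDB format on PhysioNet. One way to read a downloaded record uses the wfdb package; the package choice and the local path are assumptions, since the paper does not prescribe a loader.

```python
import wfdb  # pip install wfdb -- an assumed loader; the paper does not name one

# Read one record downloaded from the PhysioNet links above.
# The local path is hypothetical.
record = wfdb.rdrecord("data/ptb-xl/records500/00000/00001_hr")
signal = record.p_signal          # numpy array, shape (time, leads)
print(signal.shape, record.fs, record.sig_name)
```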
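For the Dataset Splits row, a minimal sketch of the 70-10-20 train/validation/test division, assuming scikit-learn; the stratification and seed are assumptions, as the paper states only the ratio.

```python
from sklearn.model_selection import train_test_split

def split_70_10_20(records, labels, seed=42):
    """70/10/20 train/val/test split, as described for the downstream
    datasets; stratified sampling and the seed are assumptions."""
    train, rest, _, y_rest = train_test_split(
        records, labels, test_size=0.30, stratify=labels, random_state=seed)
    # 2/3 of the remaining 30% -> 20% test; the other 10% is validation.
    val, test, _, _ = train_test_split(
        rest, y_rest, test_size=2 / 3, stratify=y_rest, random_state=seed)
    return train, val, test
```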
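For the Experiment Setup row, a sketch of the Table 6 pre-training optimization: AdamW with linear warmup followed by cosine annealing. Interpreting the 40 warmup steps as epochs and leaving weight decay at PyTorch's default are assumptions.

```python
import math
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_optimizer(model, lr=1.2e-3, warmup=40, total=800):
    """AdamW plus a warmup/cosine schedule using the Table 6
    pre-training values (lr=0.0012, 40 warmup, 800 epochs)."""
    optimizer = AdamW(model.parameters(), lr=lr)

    def schedule(epoch):
        if epoch < warmup:
            return (epoch + 1) / warmup                     # linear warmup
        progress = (epoch - warmup) / max(1, total - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * progress))   # cosine annealing

    return optimizer, LambdaLR(optimizer, schedule)
```

Under this per-epoch interpretation, scheduler.step() would be called once at the end of each of the 800 pre-training epochs.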