Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement
Authors: Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, Rossella Arcucci
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Based on MERL, we perform the first benchmark across six public ECG datasets, showing the superior performance of MERL compared against eSSL methods. Notably, MERL achieves an average AUC score of 75.2% in zero-shot classification (without training data), 3.2% higher than linear probed eSSL methods with 10% annotated [...] |
| Researcher Affiliation | Academia | (1) Data Science Institute, Imperial College London, UK; (2) Department of Earth Science and Engineering, Imperial College London, UK; (3) Ohio State University, Columbus, US; (4) Department of Engineering Science, University of Oxford, Oxford, UK; (5) Institute of Clinical Sciences, Imperial College London, UK; (6) Department of Infectious Disease Epidemiology, Imperial College London, UK; (7) Royal Brompton and Harefield Hospitals, UK; (8) Department of Computing, Imperial College London, UK; (9) Department of Brain Sciences, Imperial College London, UK. |
| Pseudocode | No | The paper describes its methods textually and with mathematical equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | All code can be accessed at https://github.com/cheliu-computation/MERL |
| Open Datasets | Yes | MIMIC-ECG. In our study, we pre-train the MERL framework on the MIMIC-ECG dataset (Gow et al.). [...] PTBXL. This dataset (Wagner et al., 2020) [...] CPSC2018. This publicly accessible dataset (Liu et al., 2018) [...] Chapman-Shaoxing-Ningbo (CSN). This publicly accessible dataset (Zheng et al., 2020; 2022) |
| Dataset Splits | Yes | We follow the official data split (Wagner et al., 2020) for the train:val:test split. [...] We split the dataset as 70%:10%:20% for the train:val:test split. [...] Table 10 (Details on Data Split): PTBXL-Super (Wagner et al., 2020) — 5 categories; train 17,084 / valid 2,146 / test 2,158. |
| Hardware Specification | Yes | All experiments were conducted on eight NVIDIA A100-40GB GPUs. |
| Software Dependencies | No | The paper mentions software components like 'MedCPT' and the 'AdamW optimizer' but does not provide specific version numbers for these or for other key software dependencies such as the programming language or deep learning framework. |
| Experiment Setup | Yes | In the pre-training stage, we employ a randomly initialized 1D-ResNet18 as the ECG encoder. For text encoding, we employ MedCPT (Jin et al., 2023) by default. The impact of various text encoders on downstream performance is discussed in Sec 5. We select the AdamW optimizer, setting a learning rate of 2×10⁻⁴ and a weight decay of 1×10⁻⁵. We pre-train MERL for 50 epochs, applying a cosine annealing scheduler for learning rate adjustments. We maintain a batch size of 512 per GPU and [...] Table 11 (Hyperparameter settings on downstream tasks): learning rate 0.001; batch size 16; epochs 100; optimizer AdamW; learning rate scheduler: cosine annealing; warmup steps 5. |
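The downstream hyperparameters quoted above (learning rate 0.001, 100 epochs, cosine annealing with 5 warmup steps) can be sketched as a plain-Python schedule function. This is a minimal sketch, not the paper's code: the paper does not state how warmup is shaped, so a linear ramp is assumed here, and `cosine_lr` is a hypothetical helper name.

```python
import math

def cosine_lr(epoch, base_lr=0.001, total_epochs=100, warmup_epochs=5, min_lr=0.0):
    """Cosine-annealed learning rate with linear warmup.

    Defaults mirror the downstream settings quoted from Table 11
    (lr 0.001, 100 epochs, cosine annealing, 5 warmup steps); the
    linear-warmup shape itself is an assumption, not from the paper.
    """
    if epoch < warmup_epochs:
        # Linear ramp from base_lr / warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine decay from base_lr down to min_lr over the remaining epochs.
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Full 100-epoch schedule: ramps to the peak lr by epoch 4, then decays.
schedule = [cosine_lr(e) for e in range(100)]
```

In a PyTorch setup this would typically be realized with `torch.optim.AdamW` plus `torch.optim.lr_scheduler.CosineAnnealingLR`, with warmup handled separately.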