Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
H-Tuning: Toward Low-Cost and Efficient ECG-based Cardiovascular Disease Detection with Pre-Trained Models
Authors: Rushuang Zhou, Yuanting Zhang, Yining Dong
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four ECG datasets demonstrate that H-Tuning reduces the GPU memory consumption during fine-tuning by 6.34 times while achieving comparable CVDs detection performance to standard fine-tuning. With the knowledge distillation technique, the model inference latency and the memory consumption are reduced by 4.52 times and 19.83 times. |
| Researcher Affiliation | Academia | 1Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China 2Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong, China 3Department of Electronic Engineering, Chinese University of Hong Kong, Hong Kong, China 4Hong Kong Institutes of Medical Engineering, Hong Kong, China 5The AICARE Bay Lab, Guangdong Medical University, Dong Guan, China 6Department of Data Science, City University of Hong Kong, Hong Kong, China. Correspondence to: Yining Dong <EMAIL>. |
| Pseudocode | Yes | A.2. Algorithm of H-Tuning The algorithm of the proposed H-Tuning is presented in Algorithm 1. |
| Open Source Code | Yes | Code is available at https://github.com/KAZABANA/H-Tuning |
| Open Datasets | Yes | In this study, the Chapman-Shaoxing database (Zheng et al., 2020b), the Georgia 12-lead ECG Challenge (G12EC) database (Alday et al., 2020), the Physikalisch-Technische Bundesanstalt (PTB-XL) database (Wagner et al., 2020), and the Ningbo database (Zheng et al., 2020a) are used for the performance evaluation of our H-Tuning framework. The four datasets were also included in the Physionet 2020/2021 challenge (Alday et al., 2020; Reyna et al., 2022). |
| Dataset Splits | Yes | For each dataset, a training set and a held-out test set are randomly sampled in a 1:9 ratio. Then, a validation set is collected from the training set and accounts for 20% of it. |
| Hardware Specification | Yes | All the experiments are conducted on a single NVIDIA A6000 graphics processing unit using the PyTorch library. |
| Software Dependencies | No | All the experiments are conducted on a single NVIDIA A6000 graphics processing unit using the PyTorch library. The Adam optimizer is utilized to conduct the gradient descent process defined in Eq. (9), with a learning rate of η = 0.002. While PyTorch is mentioned, no specific version is provided, and Adam is an optimization method, not a versioned software dependency. |
| Experiment Setup | Yes | The Adam optimizer is utilized to conduct the gradient descent process defined in Eq. (9), with a learning rate of η = 0.002. The batch sizes N and N1 defined in the proposed mix-order optimization are set to 128 and 2, respectively. Additionally, the controlling weight λ is searched within the set {0.85, 0.90, 0.95, 0.99}. The perturbation scale µ for the SPSA process is searched within the set {0.001, 0.0001}, and the number of queries n is set to 1. The rank r of the low-rank adaptation process is set to 16, and the number of deep layers M is set to 2. |
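The SPSA process referenced in the experiment setup (perturbation scale µ, number of queries n = 1) can be illustrated with a generic simultaneous-perturbation gradient estimator. This is a minimal sketch in plain Python, not the authors' implementation; the function name, interface, and the quadratic test objective are illustrative assumptions.

```python
import random

def spsa_gradient(f, theta, mu=0.001, n_queries=1, rng=random):
    """Estimate the gradient of f at theta via SPSA.

    Each query perturbs all coordinates simultaneously with a random
    Rademacher direction and uses two function evaluations, so the cost
    is independent of the parameter dimension. `mu` plays the role of
    the perturbation scale and `n_queries` the number of queries n.
    """
    d = len(theta)
    grad = [0.0] * d
    for _ in range(n_queries):
        # Rademacher perturbation direction: each entry is ±1.
        delta = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        f_plus = f([t + mu * s for t, s in zip(theta, delta)])
        f_minus = f([t - mu * s for t, s in zip(theta, delta)])
        # Two-sided finite-difference estimate along the random direction,
        # averaged over the n_queries repetitions.
        for i in range(d):
            grad[i] += (f_plus - f_minus) / (2.0 * mu * delta[i]) / n_queries
    return grad

# Illustrative check on f(x) = x^2, whose true gradient at x = 3 is 6.
g = spsa_gradient(lambda x: x[0] ** 2, [3.0], mu=0.001, n_queries=1)
```

In one dimension the estimate is exact for a quadratic; in higher dimensions it is unbiased but noisy, which is why zeroth-order methods like this trade gradient accuracy for the memory savings of never running backpropagation.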