12-Lead ECG Reconstruction via Koopman Operators
Authors: Tomer Golany, Kira Radinsky, Daniel Freedman, Saar Minha
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical evaluation using 12-lead ECG signals from thousands of patients, and show that we are able to reconstruct the signals in such a way that enables accurate clinical diagnosis. |
| Researcher Affiliation | Collaboration | 1. Technion - Israel Institute of Technology, Haifa, Israel; 2. Google Research; 3. Shamir Medical Center, Zerifin, Israel, and Sackler School of Medicine, Tel-Aviv University, Israel. |
| Pseudocode | No | The paper describes the steps for reconstruction using mathematical formulations and descriptive text, but it does not provide a formal pseudocode block or algorithm block. |
| Open Source Code | Yes | We share the code for the reproducibility of our results (footnote: link anonymized). |
| Open Datasets | Yes | The Georgia 12-lead ECG dataset, referred to as G12EC, was introduced in the 12-lead ECG Physionet Challenge 2020 (Alday et al., 2020) and is considered one of the largest public 12-lead ECG datasets. |
| Dataset Splits | No | The paper mentions a validation set was used for model selection ("the final model being the one with the best accuracy on the validation set") but does not provide specific split percentages or sample counts for it. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a "Residual Neural Network (He et al., 2016)" and "Adam Optimizer", but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | The network was trained by feeding 12-lead ECG batches of size 128 from the training data. The binary cross-entropy loss was minimized using Adam Optimizer with initial learning rate 0.0001. The training ran for 100 epochs, with the final model being the one with the best accuracy on the validation set. |
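The training procedure quoted above (batches of 128, binary cross-entropy minimized with Adam at learning rate 0.0001, 100 epochs, keeping the weights with the best validation accuracy) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper trains a Residual Neural Network on 12-lead ECGs, whereas this sketch substitutes a simple NumPy logistic classifier so the loop is self-contained; only the hyperparameters and the model-selection rule come from the paper.

```python
import numpy as np

# Hyperparameters quoted from the paper's setup. The model below is a
# stand-in (logistic classifier), not the paper's ResNet.
BATCH_SIZE = 128
LEARNING_RATE = 1e-4
EPOCHS = 100


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(x_train, y_train, x_val, y_val):
    rng = np.random.default_rng(0)
    w = np.zeros(x_train.shape[1])
    b = 0.0
    # Adam state (standard beta1/beta2/eps defaults, assumed here since
    # the paper only states the initial learning rate).
    m, v = np.zeros_like(w), np.zeros_like(w)
    mb = vb = 0.0
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    t = 0
    best_acc, best_params = -1.0, (w.copy(), b)

    for epoch in range(EPOCHS):
        idx = rng.permutation(len(x_train))
        for start in range(0, len(idx), BATCH_SIZE):
            batch = idx[start:start + BATCH_SIZE]
            xb, yb = x_train[batch], y_train[batch]
            p = sigmoid(xb @ w + b)
            # Gradient of binary cross-entropy w.r.t. the logits is (p - y).
            g = p - yb
            gw = xb.T @ g / len(batch)
            gb = g.mean()
            # Adam update with bias correction.
            t += 1
            m = beta1 * m + (1 - beta1) * gw
            v = beta2 * v + (1 - beta2) * gw ** 2
            mb = beta1 * mb + (1 - beta1) * gb
            vb = beta2 * vb + (1 - beta2) * gb ** 2
            w -= LEARNING_RATE * (m / (1 - beta1 ** t)) / (
                np.sqrt(v / (1 - beta2 ** t)) + eps)
            b -= LEARNING_RATE * (mb / (1 - beta1 ** t)) / (
                np.sqrt(vb / (1 - beta2 ** t)) + eps)
        # Model selection as described: keep the weights with the best
        # accuracy on the validation set.
        acc = np.mean((sigmoid(x_val @ w + b) > 0.5) == y_val)
        if acc > best_acc:
            best_acc, best_params = acc, (w.copy(), b)
    return best_params, best_acc
```

Note the selection rule: the returned model is the checkpoint with the highest validation accuracy seen across all 100 epochs, not the weights at the final epoch.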