Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series

Authors: Yihe Wang, Yu Han, Haishuai Wang, Xiang Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments in the challenging patient-independent setting. We compare COMET against six baselines using three diverse datasets, which include ECG signals for myocardial infarction and EEG signals for Alzheimer's and Parkinson's diseases. The results demonstrate that COMET consistently outperforms all baselines, particularly in setups with 10% and 1% labeled data fractions across all datasets.
Researcher Affiliation | Academia | Yihe Wang (University of North Carolina at Charlotte, ywang145@uncc.edu); Yu Han (University of Chinese Academy of Sciences, hanyu21@mails.ucas.ac.cn); Haishuai Wang (Zhejiang University, haishuai.wang@zju.edu.cn); Xiang Zhang (University of North Carolina at Charlotte, xiang.zhang@uncc.edu)
Pseudocode | No | The paper describes the model architecture and training process in detail and includes diagrams (e.g., Figure 2), but it does not contain any formally labeled pseudocode blocks or algorithms.
Open Source Code | Yes | The source code is available at https://github.com/DL4mHealth/COMET.
Open Datasets | Yes | Datasets: (1) AD [44] has 23 patients, 663 trials, and 5967 multivariate EEG samples. (2) PTB [45] has 198 patients, 6237 trials, and 62370 multivariate ECG samples. (3) TDBRAIN [46] has 72 patients, 624 trials, and 11856 multivariate EEG samples.
Dataset Splits | Yes | All datasets are split into training, validation, and test sets in a patient-independent setting (Figure 3)... (1) AD [44] has 23 patients, 663 trials, and 5967 multivariate EEG samples. There are 4329, 891, and 747 samples in the training, validation, and test sets. A minimal sketch of such a patient-independent split appears after this table.
Hardware Specification | Yes | All experiments except baseline SimCLR run on an NVIDIA RTX 4090. Baseline SimCLR runs on an NVIDIA A100 via Google Colab.
Software Dependencies | No | In Appendix E, the paper mentions using 'PyTorch' and 'logistic regression from the Sklearn library'. However, it does not specify version numbers for these software dependencies (e.g., 'PyTorch 1.x' or 'Sklearn 0.y.z'), which are necessary for reproducible software details. A sketch of this kind of logistic-regression probe follows the table.
Experiment Setup | Yes | During contrastive pre-training, we set the learning rate to 0.0001. The pre-training batch size is 256, and the total number of pre-training epochs is 100. ... The hyperparameters λ1, λ2, λ3, λ4 are assigned values of (0.25, 0.25, 0.25, 0.25), (0.1, 0.7, 0.1, 0.1), and (0.25, 0.25, 0.25, 0.25) for the AD, PTB, and TDBrain datasets, respectively. These quoted hyperparameters are gathered into a configuration sketch below the table.
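The Dataset Splits row above relies on a patient-independent protocol: every sample from a given patient lands in exactly one of the training, validation, or test sets, so no patient is shared across splits. The sketch below illustrates that idea; the function name, the patient_ids array, and the use of sklearn's GroupShuffleSplit are illustrative assumptions, not code taken from the COMET repository.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def patient_independent_split(X, y, patient_ids, val_frac=0.15, test_frac=0.15, seed=0):
    """Split samples so each patient appears in exactly one of train/val/test."""
    X, y, patient_ids = np.asarray(X), np.asarray(y), np.asarray(patient_ids)

    # Hold out the test patients first, grouping samples by patient ID.
    outer = GroupShuffleSplit(n_splits=1, test_size=test_frac, random_state=seed)
    trainval_idx, test_idx = next(outer.split(X, y, groups=patient_ids))

    # Split the remaining patients into training and validation patients.
    inner = GroupShuffleSplit(n_splits=1,
                              test_size=val_frac / (1.0 - test_frac),
                              random_state=seed)
    tr_sub, va_sub = next(inner.split(X[trainval_idx], y[trainval_idx],
                                      groups=patient_ids[trainval_idx]))
    train_idx, val_idx = trainval_idx[tr_sub], trainval_idx[va_sub]

    # Sanity check: no patient ID appears in more than one split.
    assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])
    assert not set(patient_ids[val_idx]) & set(patient_ids[train_idx])
    return train_idx, val_idx, test_idx
```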
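The Software Dependencies row mentions evaluation with logistic regression from the Sklearn library on top of PyTorch-learned representations. Below is a minimal sketch of that kind of linear probe, assuming a frozen encoder module that maps each batch of time-series windows to a fixed-length embedding; the encoder and DataLoader names are placeholders rather than the paper's actual API.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

@torch.no_grad()
def extract_embeddings(encoder, loader, device="cuda"):
    """Run the frozen encoder over a DataLoader and collect embeddings and labels."""
    encoder.eval()
    feats, labels = [], []
    for x, y in loader:
        z = encoder(x.to(device))          # (batch, embedding_dim)
        feats.append(z.cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

def linear_probe(encoder, train_loader, test_loader, device="cuda"):
    """Fit a logistic-regression classifier on frozen representations."""
    X_tr, y_tr = extract_embeddings(encoder, train_loader, device)
    X_te, y_te = extract_embeddings(encoder, test_loader, device)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```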
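Finally, the Experiment Setup row quotes a learning rate of 0.0001, a pre-training batch size of 256, 100 pre-training epochs, and dataset-specific weights λ1–λ4. A hedged sketch of how these values could be organized, with the λ values weighting four contrastive loss terms into one pre-training loss, is shown below; the dictionary layout, key names, and the loss-combination helper are assumptions for illustration, not the official configuration.

```python
# Hyperparameters quoted in the Experiment Setup row, gathered per dataset.
# Layout and key names are illustrative, not the paper's actual config files.
PRETRAIN_CONFIG = {
    "lr": 1e-4,
    "batch_size": 256,
    "epochs": 100,
}

LOSS_WEIGHTS = {               # (λ1, λ2, λ3, λ4) per dataset
    "AD":      (0.25, 0.25, 0.25, 0.25),
    "PTB":     (0.10, 0.70, 0.10, 0.10),
    "TDBRAIN": (0.25, 0.25, 0.25, 0.25),
}

def combined_contrastive_loss(loss_terms, dataset):
    """Weight the four contrastive loss terms by the dataset-specific λ values."""
    lambdas = LOSS_WEIGHTS[dataset]
    return sum(lam * term for lam, term in zip(lambdas, loss_terms))
```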