Contrastive Learning for Clinical Outcome Prediction with Partial Data Sources
Authors: Meng Xia, Jonathan Wilson, Benjamin Goldstein, Ricardo Henao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present experiments on two real-world datasets demonstrating that CLOPPS consistently outperforms strong baselines in several practical scenarios. |
| Researcher Affiliation | Academia | ¹Department of Electrical and Computer Engineering, Duke University, Durham, US; ²Department of Biostatistics and Bioinformatics, Duke University, Durham, US; ³King Abdullah University of Science and Technology, Thuwal, KSA. |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code used in the experiments is available at https://github.com/mx41-m/Contrastive-Learning.git. |
| Open Datasets | Yes | Given that the Private dataset is not readily publicly accessible, we also validate CLOPPS using the MIMIC-III clinical database (Johnson et al., 2016). |
| Dataset Splits | Yes | These sequences were then divided into training, validation, and test datasets following an 8 : 1 : 1 ratio. |
| Hardware Specification | No | The paper mentions that models were trained and experiments conducted, but it does not provide any specific details about the hardware used, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions using Hugging Face's transformers library, but it does not specify version numbers for this or any other software dependency, which is necessary for reproducibility. |
| Experiment Setup | Yes | The encoders for CLOPPS are trained for 50, 100 and 100 epochs on MMNIST, Private and MIMIC, respectively. The classifiers for CLOPPS are trained for 10, 5 and 5 epochs on MMNIST, Private and MIMIC, respectively. In CLOPPS, the values for τ, w and d are set to 0.1, 2 and 12, respectively, based on experimental results. For all models (excluding Elastic Net), AdamW (Loshchilov & Hutter, 2017) is employed as the optimizer. The values for the learning rate, betas, weight decay and batch size are set for all models to 10⁻⁴, (0.9, 0.999), 0.01, and 64, respectively. |
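
For the Dataset Splits row above, a minimal sketch of an 8 : 1 : 1 partition is given below. It is a hypothetical illustration assuming a simple shuffled split; the function name, seed, and sequence representation are not taken from the paper or its repository.

```python
import random

def split_sequences(sequences, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle patient sequences and partition them into
    train / validation / test subsets in an 8:1:1 ratio."""
    rng = random.Random(seed)
    shuffled = list(sequences)
    rng.shuffle(shuffled)
    n_train = int(ratios[0] * len(shuffled))
    n_val = int(ratios[1] * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Example usage: train, val, test = split_sequences(all_sequences)
```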
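
For the Experiment Setup row, the sketch below collects the reported optimizer and hyperparameter values (AdamW with learning rate 10⁻⁴, betas (0.9, 0.999), weight decay 0.01, batch size 64, temperature τ = 0.1, and the per-dataset epoch counts) into a minimal PyTorch configuration. The encoder, data loader, and contrastive loss referenced in the comments are placeholders, not the released CLOPPS implementation.

```python
import torch

# Hyperparameters reported for all models in the experiment setup.
LEARNING_RATE = 1e-4        # reported as 10^-4
BETAS = (0.9, 0.999)
WEIGHT_DECAY = 0.01
BATCH_SIZE = 64
TEMPERATURE = 0.1           # tau for the contrastive loss
ENCODER_EPOCHS = {"MMNIST": 50, "Private": 100, "MIMIC": 100}
CLASSIFIER_EPOCHS = {"MMNIST": 10, "Private": 5, "MIMIC": 5}

def make_optimizer(model: torch.nn.Module) -> torch.optim.AdamW:
    """AdamW configured with the values reported in the paper."""
    return torch.optim.AdamW(
        model.parameters(),
        lr=LEARNING_RATE,
        betas=BETAS,
        weight_decay=WEIGHT_DECAY,
    )

# Hypothetical usage: `encoder`, `train_loader` (batch size 64), and
# `contrastive_loss` stand in for the paper's CLOPPS components.
# optimizer = make_optimizer(encoder)
# for epoch in range(ENCODER_EPOCHS["MIMIC"]):
#     for batch in train_loader:
#         optimizer.zero_grad()
#         loss = contrastive_loss(encoder(batch), temperature=TEMPERATURE)
#         loss.backward()
#         optimizer.step()
```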