Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning

Authors: Dongjoon Lee, Hyeryn Park, Changhee Lee

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on multiple real-world clinical datasets demonstrate that our method outperforms state-of-the-art deep survival models in both discrimination and calibration.
Researcher Affiliation | Academia | Dongjoon Lee (Chung-Ang University, dongzza97@cau.ac.kr); Hyeryn Park (Chung-Ang University, hyeryn2000@cau.ac.kr); Changhee Lee (Korea University, changheelee@korea.ac.kr)
Pseudocode | Yes | Please find the pseudo-code of ConSurv in Appendix H.
Open Source Code | Yes | Source code for ConSurv is available at https://github.com/dongzza97/ConSurv
Open Datasets | Yes | Datasets. We compare our proposed method and the benchmarks with the following six commonly used real-world clinical datasets: METABRIC, NWTCO, GBSG, FLCHAIN, SUPPORT, and SEER. For detailed descriptions of these datasets, please refer to Appendix D.1.
Dataset Splits | Yes | We split the data into train, test, and validation sets with a ratio of 0.64:0.20:0.16, and then apply min-max normalization to the input features. (A preprocessing sketch follows the table.)
Hardware Specification | Yes | The specification of the machine is CPU: Intel Xeon Gold 6240R, GPU: NVIDIA RTX A6000.
Software Dependencies | No | The paper mentions general software components and links to GitHub repositories for benchmark models (e.g., DeepHit, DRSA, DCS, X-CAL), but it does not specify versions for its own core dependencies such as Python, PyTorch, or the other libraries used in its implementation.
Experiment Setup | Yes | We perform a random search for hyperparameter optimization, including the batch size, hidden dimension, depth, learning rates, corruption rates, σ, α, and ν, on the validation set and choose the settings with the best performance for ConSurv on each dataset. Table 12 describes the model specifications for the evaluated datasets. (A random-search sketch follows below.)
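
As a concrete illustration of the Open Datasets and Dataset Splits rows, the sketch below loads one of the listed datasets, carves out a 0.64:0.20:0.16 train/test/validation split, and applies min-max normalization. Using pycox to load METABRIC and scikit-learn for splitting and scaling are our assumptions for illustration; the paper does not state which libraries it uses.

```python
# Hedged sketch of the reported preprocessing: a 0.64 / 0.20 / 0.16
# train / test / validation split followed by min-max normalization.
# Loading METABRIC via pycox and using scikit-learn utilities are
# assumptions made for illustration only.
from pycox.datasets import metabric
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = metabric.read_df()  # features x0..x8 plus 'duration' and 'event' columns
features = [c for c in df.columns if c not in ("duration", "event")]

# Carve out the 20% test set first, then split the remaining 80% into
# 64% train and 16% validation (0.16 / 0.80 = 0.20 of the remainder).
df_trainval, df_test = train_test_split(df, test_size=0.20, random_state=0)
df_train, df_val = train_test_split(df_trainval, test_size=0.20, random_state=0)

# Min-max normalize the input features; fitting the scaler on the training
# split only is our choice, not something the paper specifies.
scaler = MinMaxScaler().fit(df_train[features])
x_train = scaler.transform(df_train[features])
x_val = scaler.transform(df_val[features])
x_test = scaler.transform(df_test[features])
```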
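
The Experiment Setup row's random hyperparameter search could look like the sketch below. The candidate value ranges and the `train_fn` / `score_fn` callables are hypothetical placeholders; the paper only lists which hyperparameters were tuned, and Table 12 reports the selected values.

```python
# Hedged sketch of a random search over the hyperparameters listed above.
# The candidate values are hypothetical; the paper does not report its ranges.
import random

SEARCH_SPACE = {
    "batch_size":      [64, 128, 256],
    "hidden_dim":      [32, 64, 128, 256],
    "depth":           [2, 3, 4],
    "learning_rate":   [1e-4, 5e-4, 1e-3],
    "corruption_rate": [0.1, 0.3, 0.5],
    "sigma":           [0.1, 0.5, 1.0],   # σ
    "alpha":           [0.1, 0.5, 1.0],   # α
    "nu":              [0.1, 0.5, 1.0],   # ν
}

def random_search(train_fn, score_fn, n_trials=50, seed=0):
    """Sample configurations uniformly at random and keep the best one.

    train_fn(config) -> fitted model; score_fn(model) -> validation metric
    (higher is better). Both are placeholders standing in for the actual
    ConSurv training and evaluation routines, which the paper does not detail.
    """
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        score = score_fn(train_fn(config))
        if score > best_score:
            best_config, best_score = config, score
    return best_config
```

In practice, `train_fn` would fit the model with the sampled configuration on the training split and `score_fn` would score it on the validation split, matching the selection-on-validation procedure described above.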