Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Time-to-Event Pretraining for 3D Medical Imaging

Authors: Zepeng Frazier Huo, Jason Fries, Alejandro Lozano, Jeya Maria Jose Valanarasu, Ethan Steinberg, Louis Blankemeier, Akshay Chaudhari, Curtis Langlotz, Nigam Shah

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using a dataset of 18,945 CT scans (4.2 million 2D images) and time-to-event distributions across thousands of EHR-derived tasks, our method improves outcome prediction, achieving an average AUROC increase of 23.7% and a 29.4% gain in Harrell's C-index across 8 benchmark tasks.
Researcher Affiliation | Collaboration | 1. Center for Biomedical Informatics Research, Stanford University; 2. Department of Biomedical Data Science, Stanford University; 3. Department of Computer Science, Stanford University; 4. Stanford Medical Center, Stanford Health Care; 5. Stanford Center for Artificial Intelligence in Medicine and Imaging; 6. Prealize Health; 7. Technology and Digital Solutions, Stanford Health Care; 8. Department of Medicine, Stanford School of Medicine; 9. Human-Centered Artificial Intelligence Institute, Stanford University; 10. Department of Radiology, Stanford University; 11. Clinical Excellence Research Center, Stanford School of Medicine
Pseudocode | No | The paper describes the methodology in prose and mathematical equations (e.g., Sections 3 and 4), but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | We also make all of our experiment code and pretrained model checkpoints available for download [1][2], to contribute to the community for continued pretraining of imaging foundation models with prognosis as added benefit. [1] https://github.com/som-shahlab/tte-pretraining [2] https://huggingface.co/StanfordShahLab
Open Datasets | Yes | We use two 3D medical imaging datasets (see Table 1). INSPECT (Huang et al., 2024) is a multimodal dataset of paired CT scans and radiology notes where each patient is linked to their longitudinal EHR [3][4]. RSPECT (Colak et al., 2021) is an image-only dataset of CT scans annotated by radiologists for imaging biomarkers related to pulmonary embolism and cardiac function. ... [3] https://stanfordaimi.azurewebsites.net/datasets?term=INSPECT [4] https://redivis.com/datasets/dzc6-9jyt6gapt
Dataset Splits | Yes | RSPECT does not provide a public test set, so we impose an 80/5/15 split on the train set (n=7,279) for our experiments.
Hardware Specification | Yes | We utilized two compute nodes from an on-premise cluster, each with 24 Intel Xeon 2.7GHz CPU cores, 200 GB RAM, and 4 Nvidia A100 GPUs or 4 H100 GPUs.
Software Dependencies | Yes | Logistic Regression ... software version scikit-learn 1.3.2; Deep Surv ... software version pycox 0.2.3
Experiment Setup | Yes | For our PEANN, we use 8 piecewise time bins and 8,192 pretraining tasks (see Appendices O, P) per the best-performing hyperparameters from Steinberg et al. (2024). Appendix F, Table 8: hyperparameter search grids and software versions of the methods under comparison. This table details hyperparameters such as learning rate, dropout probability, patch size, window size, depth, and specific settings for the probing heads (Logistic Regression and Deep Surv).
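The reported 80/5/15 split on RSPECT's public train set (n=7,279) can be sketched as follows. This is an illustrative helper, not the authors' released splitting code; the seed, shuffling scheme, and rounding of split sizes are assumptions.

```python
import random

def split_indices(n, fractions=(0.80, 0.05, 0.15), seed=0):
    """Shuffle n sample indices and cut them into train/val/test
    pieces according to `fractions` (hypothetical helper)."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(round(fractions[0] * n))
    n_val = int(round(fractions[1] * n))
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# An 80/5/15 split of the 7,279 public RSPECT training scans
train, val, test = split_indices(7279)
# -> 5,823 train, 364 val, 1,092 test samples
```

In practice a patient-level (rather than scan-level) split is often preferred in medical imaging to avoid leakage between partitions; whether the paper splits by patient or by scan is not stated in the excerpt above.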
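The "8 piecewise time bins" in the PEANN setup refer to discretizing each time-to-event label into per-bin targets. A minimal sketch of that discretization, assuming illustrative bin edges and a common labeling convention (0 = survived the bin, 1 = event in the bin, None = unobserved after censoring), neither of which is taken from the paper:

```python
import bisect

# Hypothetical bin edges in days: 7 cut points define 8 bins.
# The paper uses 8 bins but the actual edges are not given here.
BIN_EDGES = [1, 7, 30, 90, 180, 365, 730]

def time_to_bin_labels(time, event, edges=BIN_EDGES):
    """Discretize a (time, event) pair into per-bin labels:
    0 for bins the subject survived through, 1 for the bin where
    the event occurs, None for bins after censoring/event."""
    k = bisect.bisect_right(edges, time)  # bin containing `time`
    n_bins = len(edges) + 1
    labels = []
    for b in range(n_bins):
        if b < k:
            labels.append(0)        # survived past this bin
        elif b == k and event:
            labels.append(1)        # event observed in this bin
        else:
            labels.append(None)     # unknown (censored or post-event)
    return labels

# Event at day 10 falls in the third bin (7 < 10 <= 30)
print(time_to_bin_labels(10, event=True))
# -> [0, 0, 1, None, None, None, None, None]
```

A piecewise-exponential head then predicts one hazard per bin, and the masked (None) bins are simply excluded from the loss, which is how censored observations still contribute signal for the bins they survived.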