How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?

Authors: Wenxuan Li, Alan Yuille, Zongwei Zhou

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our preliminary analyses indicate that the model trained only with 21 CT volumes, 672 masks, and 40 GPU hours has a transfer learning ability similar to the model trained with 5,050 (unlabeled) CT volumes and 1,152 GPU hours. More importantly, the transfer learning ability of supervised models can further scale up with larger annotated datasets, achieving significantly better performance than preexisting pre-trained models, irrespective of their pre-training methodologies or data sources. We have quantified the improved data and computational efficiency from perspectives of both pre-training (Figure 2a; 99.6% fewer data) and fine-tuning (Figure 2b; 66% less computation).
Researcher Affiliation | Academia | Wenxuan Li, Alan Yuille, Zongwei Zhou, Johns Hopkins University
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | https://github.com/MrGiovanni/SuPreM. We have developed models (termed SuPreM) pre-trained on data and annotations in AbdomenAtlas 1.1... we have maintained a standardized, accessible model repository for sharing public model weights as well as a suite of supervised pre-trained models (SuPreM) released by us.
Open Datasets | Yes | We constructed an AbdomenAtlas 1.1 dataset comprising 9,262 three-dimensional CT volumes and over 251,323 masks spanning 25 anatomical structures and 7 types of tumors. We commit to releasing AbdomenAtlas 1.1 to the public. AbdomenAtlas 1.1 is a composite dataset that unifies CT volumes from public datasets 1–17 as summarized in Table 1.
Dataset Splits | Yes | The best-performing model was selected based on the highest average DSC score over 32 classes on a validation set of 1,310 CT volumes. (A minimal sketch of this selection criterion is given after the table.)
Hardware Specification | No | This work has utilized the GPUs provided partially by ASU Research Computing and NVIDIA. This statement is too general and does not provide specific hardware models or configurations.
Software Dependencies | No | We appreciate the effort of the MONAI Team to provide open-source code for the community. While MONAI is mentioned, no specific version number or other software dependencies with versions are provided.
Experiment Setup | No | Implementation details of both pre-training and fine-tuning can be found in Appendix B.2. The main text does not provide specific hyperparameters or system-level training settings.
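
The Dataset Splits row reports model selection by the highest average Dice Similarity Coefficient (DSC) over 32 classes on the validation set. Below is a minimal sketch of that criterion, not the authors' implementation: the tensor shapes, the per-class Dice function, and the hypothetical `infer` / `checkpoints` names are assumptions introduced only for illustration.

```python
# Minimal sketch (assumed, not from the paper): pick the checkpoint whose
# class-averaged DSC on the validation set is highest.
import torch

def per_class_dice(pred: torch.Tensor, target: torch.Tensor,
                   num_classes: int, eps: float = 1e-6) -> torch.Tensor:
    """DSC per class from integer label maps of shape (B, D, H, W)."""
    dscs = []
    for c in range(num_classes):
        p = (pred == c).float()
        t = (target == c).float()
        intersection = (p * t).sum()
        dscs.append((2.0 * intersection + eps) / (p.sum() + t.sum() + eps))
    return torch.stack(dscs)

# Hypothetical selection loop over saved checkpoints:
# best = max(checkpoints,
#            key=lambda ckpt: per_class_dice(infer(ckpt, val_x), val_y, 32).mean())
```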