3D Self-Supervised Methods for Medical Imaging

Authors: Aiham Taleb, Winfried Loetzsch, Noel Danz, Julius Severin, Thomas Gaertner, Benjamin Bergner, Christoph Lippert

NeurIPS 2020

Reproducibility variables, results, and supporting excerpts:
Research Type: Experimental. "Our experiments show that pretraining models with our 3D tasks yields more powerful semantic representations, and enables solving downstream tasks more accurately and efficiently, compared to training the models from scratch and to pretraining them on 2D slices. We demonstrate the effectiveness of our methods on three downstream tasks from the medical imaging domain: i) Brain Tumor Segmentation from 3D MRI, ii) Pancreas Tumor Segmentation from 3D CT, and iii) Diabetic Retinopathy Detection from 2D Fundus images. In each task, we assess the gains in data-efficiency, performance, and speed of convergence."
Researcher Affiliation: Academia. "Digital Health & Machine Learning, Hasso-Plattner-Institute, Potsdam University, Germany"
Pseudocode: No. No explicit pseudocode or algorithm blocks were found.
Open Source Code: Yes. "We publish our implementations for the developed algorithms (both 3D and 2D versions) as an open-source library, in an effort to allow other researchers to apply and extend our methods on their datasets." Repository: https://github.com/HealthML/self-supervised-3d-tasks
Open Datasets: Yes. "In this task, we evaluate our methods by fine-tuning the learned representations on the Multimodal Brain Tumor Segmentation (BraTS) 2018 [61, 62] benchmark. Before that, we pretrain our models on brain MRI data from the UK Biobank [63] (UKB) corpus, which contains roughly 22K 3D scans. The Pancreas dataset contains annotated CT scans for 420 cases. Each scan in this dataset contains 3 different classes: pancreas (class 1), tumor (class 2), and background (class 0). To measure the performance on this benchmark, two dice scores are computed for classes 1 and 2. In this task, we pretrain using our proposed 3D tasks on pancreas scans without their annotation masks. Then, we fine-tune the obtained models on subsets of annotated data to assess the gains in both data-efficiency and performance. Finally, we also compare to the baseline model trained from scratch and to 2D models, similar to the previous downstream task. Fig. 3 demonstrates the gains."
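The excerpt above reports two per-class Dice scores on the pancreas benchmark (class 1: pancreas, class 2: tumor). As a minimal sketch of what such a metric computes, the function below evaluates the Dice coefficient for one label class of a labeled segmentation volume. This is an illustrative implementation under standard conventions, not the paper's evaluation code; the function name and the toy volumes are hypothetical.

```python
import numpy as np

def dice_score(pred, target, cls):
    """Dice coefficient for one class in a labeled segmentation volume.

    pred, target: integer label arrays of identical shape.
    cls: the class label to evaluate (e.g. 1 = pancreas, 2 = tumor).
    """
    p = (pred == cls)
    t = (target == cls)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

# Hypothetical toy 2x2x2 volumes with labels 0 (background), 1, 2
pred = np.array([[[0, 1], [1, 2]], [[0, 1], [2, 2]]])
target = np.array([[[0, 1], [1, 2]], [[0, 0], [2, 2]]])
print(dice_score(pred, target, 1))  # 0.8
print(dice_score(pred, target, 2))  # 1.0
```

Computing one score per foreground class, as done here, is the usual way to report segmentation quality when class sizes differ strongly (a small tumor would otherwise be dominated by the much larger pancreas region).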
Dataset Splits: Yes. "The BraTS dataset contains annotated MRI scans for 285 training and 66 validation cases. We fine-tune on the BraTS training set, and evaluate on its validation set. We provide additional details about architectures, training procedures, the effect of augmentation in Exemplar, and how we initialize decoders for segmentation tasks in the Appendix. We should point out that we evaluate with 5-fold cross validation on this 2D dataset."
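The entry above notes that the 2D fundus dataset is evaluated with 5-fold cross-validation. The sketch below shows what such a split produces: each sample appears in exactly one validation fold and in the training set of the other four folds. This is a generic illustration (the sample count of 40 and the helper name are hypothetical), not the authors' split code.

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Yield (train_idx, val_idx) index pairs for 5-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)       # shuffle once, then partition
    folds = np.array_split(idx, 5)         # 5 disjoint validation folds
    for k in range(5):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, val_idx

# Hypothetical: 40 samples -> each fold trains on 32 and validates on 8
sizes = [(len(tr), len(va)) for tr, va in five_fold_splits(40)]
print(sizes)  # [(32, 8), (32, 8), (32, 8), (32, 8), (32, 8)]
```

The reported metric is then typically the mean (and spread) of the per-fold scores, which is what makes cross-validation useful on a dataset too small for a fixed held-out test set.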
Hardware Specification: No. No specific hardware details, such as GPU/CPU models or types, were mentioned.
Software Dependencies: No. No specific software dependencies with version numbers were mentioned.
Experiment Setup: No. "We provide additional details about architectures, training procedures, the effect of augmentation in Exemplar, and how we initialize decoders for segmentation tasks in the Appendix."