Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation
Authors: Zhangsihao Yang, Mengwei Ren, Kaize Ding, Guido Gerig, Yalin Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on both MRI and CT segmentation tasks, we demonstrate the architectural advantages of our proposed method in comparison to both CNN- and Transformer-based UNets, when all architectures are trained with randomly initialized weights. We conduct experiments on two publicly accessible cardiac MRI datasets for the task of segmentation under limited annotation. |
| Researcher Affiliation | Academia | Zhangsihao Yang (Arizona State University, zshyang1106@gmail.com); Mengwei Ren (New York University, mengwei.ren@nyu.edu); Kaize Ding (Northwestern University, kaize.ding@northwestern.edu); Guido Gerig (New York University, gerig@nyu.edu); Yalin Wang (Arizona State University, ylwang@asu.edu) |
| Pseudocode | No | The paper describes the methodology in text and with diagrams (Fig. 1), but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/zshyang/kaf.git. |
| Open Datasets | Yes | We conduct experiments on two publicly accessible cardiac MRI datasets for the task of segmentation under limited annotation. CHD (Congenital Heart Disease) [67] is a CT dataset consisting of 68 3D cardiac images... ACDC (Automatic Cardiac Diagnosis Challenge) [2] is an MRI dataset consisting of cardiac images from 100 patients... We additionally include results on a non-cardiac CT dataset, Synapse (https://www.synapse.org/#!Synapse:syn3193805/wiki/217789), in the appendix to illustrate the generalization of our method. |
| Dataset Splits | Yes | For each cross-validation fold, N images are held out for validation, and varying M images from the remaining images are used for the few-shot training. We quantify the segmentation performance via the Dice coefficient with five-fold cross-validation following the setup in [72]. (A sketch of this split protocol appears below the table.) |
| Hardware Specification | No | The paper discusses model configuration and computational analysis (GFLOPs and GPU memory usage) in general terms but does not specify the exact hardware (e.g., specific GPU or CPU models, memory details) used to run the experiments. |
| Software Dependencies | No | The paper mentions the use of 'SGD optimizer' and 'Adam [33] optimizer' and building the framework on '2D UNet', but does not provide specific version numbers for software dependencies like Python, PyTorch, TensorFlow, or other libraries. |
| Experiment Setup | Yes | For network configuration, we build our framework on 2D UNet, and set the starting number of channels of the network as 32 for CHD, and 48 for ACDC. ... For pretraining, we assign loss weights w1, w2, w3 to 1.0, 1.0, and 0.01, respectively. We employ the SGD optimizer with a learning rate of 0.002 and batch sizes of 32 for CHD and ACDC. ... We pretrain the model for 50 epochs. For finetuning, we use the standard cross-entropy loss with the Adam [33] optimizer, with learning rates of 5×10⁻⁵ for CHD and 5×10⁻⁴ for ACDC. The batch size is set to 10, and we finetune on CHD for 100 epochs and on ACDC for 200 epochs. (See the training-schedule sketch below the table.) |
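
To make the split protocol quoted in the Dataset Splits row concrete, here is a hypothetical sketch of a five-fold cross-validation split with N held-out validation images and an M-image few-shot training subset. The function name `five_fold_splits` and the example values (68 images as in CHD, N=13, M=2) are illustrative assumptions, not taken from the paper or its released code.

```python
# Hypothetical sketch of the five-fold protocol in the Dataset Splits row;
# the 68-image total (CHD), N (held-out validation images), and M (few-shot
# training images) below are example values, not the paper's exact settings.
import random

def five_fold_splits(image_ids, n_val, m_train, seed=0):
    """Yield (train_ids, val_ids) for each of five cross-validation folds."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    fold_size = len(ids) // 5
    for k in range(5):
        val_ids = ids[k * fold_size:(k + 1) * fold_size][:n_val]
        held_out = set(val_ids)
        remaining = [i for i in ids if i not in held_out]
        train_ids = rng.sample(remaining, m_train)  # varying M few-shot images
        yield train_ids, val_ids

for fold, (train, val) in enumerate(five_fold_splits(range(68), n_val=13, m_train=2)):
    print(f"fold {fold}: {len(train)} train / {len(val)} val")
```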
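
And a minimal PyTorch sketch of the pretraining/finetuning schedule quoted in the Experiment Setup row. The model, the data loaders, and the three self-supervised loss terms are placeholders; only the optimizers, learning rates, batch sizes, epoch counts, and loss weights come from the quoted text.

```python
# A minimal sketch of the training schedule quoted above, assuming PyTorch.
# `model`, the data loaders (batch size 32 for pretraining, 10 for
# finetuning), and the three pretraining loss callables are placeholders.
import torch

PRETRAIN = {"lr": 0.002, "epochs": 50}                      # SGD
FINETUNE = {
    "CHD":  {"lr": 5e-5, "epochs": 100},                    # Adam
    "ACDC": {"lr": 5e-4, "epochs": 200},
}
LOSS_WEIGHTS = (1.0, 1.0, 0.01)  # w1, w2, w3

def pretrain(model, loader, loss_terms):
    """Self-supervised pretraining: weighted sum of the three loss terms."""
    opt = torch.optim.SGD(model.parameters(), lr=PRETRAIN["lr"])
    for _ in range(PRETRAIN["epochs"]):
        for batch in loader:
            opt.zero_grad()
            loss = sum(w * term(model, batch)
                       for w, term in zip(LOSS_WEIGHTS, loss_terms))
            loss.backward()
            opt.step()

def finetune(model, loader, dataset="CHD"):
    """Supervised finetuning with standard cross-entropy."""
    cfg = FINETUNE[dataset]
    opt = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(cfg["epochs"]):
        for images, labels in loader:
            opt.zero_grad()
            ce(model(images), labels).backward()
            opt.step()
```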