Enhancing Pseudo Label Quality for Semi-supervised Domain-Generalized Medical Image Segmentation

Authors: Huifeng Yao, Xiaowei Hu, Xiaomeng Li

AAAI 2022, pp. 3099-3107

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method sets new records on public datasets, i.e., M&Ms and SCGM. Notably, without using domain labels, our method surpasses the prior art that even uses domain labels by 11.67% on Dice on the M&Ms dataset with 2% labeled data.
Researcher Affiliation | Academia | (1) Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology; (2) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (3) The Hong Kong University of Science and Technology Shenzhen Research Institute
Pseudocode | No | The paper contains architectural diagrams (e.g., Figure 2, Figure 3) but does not include any structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/XMed-Lab/EPL_SemiDG.
Open Datasets | Yes | We adopt the multi-centre, multi-vendor & multi-disease cardiac image segmentation (M&Ms) dataset (Campello et al. 2021) to evaluate our method. We also adopt the spinal cord gray matter segmentation (SCGM) dataset (Prados et al. 2017) to evaluate our method.
Dataset Splits | No | The paper reports training with a percentage of labeled data (e.g., '2% labeled data', '5% labeled data', '20% labeled data') and testing on an unseen domain, but it does not give explicit train/validation/test splits, by percentage or count, that would be needed to reproduce the data partitioning.
Hardware Specification | Yes | We implemented the model on PyTorch 1.8 and trained it using two NVIDIA 3090 GPUs with 377 GB RAM on the Ubuntu 20.04 system.
Software Dependencies | Yes | We implemented the model on PyTorch 1.8.
Experiment Setup | Yes | For the M&Ms dataset, we used AdamW to optimize the network with a weight decay of 0.1, a learning rate of 0.0001, and a batch size of 32; we trained the whole architecture for 20 epochs, and images were cropped to 288×288. We set β in Equation 9 to 3 to balance the supervision loss and the proposed CACPS loss, and λ in Equation 1 to 1. For the SCGM dataset, we used AdamW with a weight decay of 0.1, a learning rate of 0.0001, and a batch size of 8; we trained for 50 epochs, with images cropped to 288×288, and set β in Equation 9 to 1.5 and λ in Equation 1 to 0.8. We also adopted random rotation, random scaling, random crop, and random flip as data augmentation strategies.
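
To make the reported hyperparameters concrete, below is a minimal PyTorch sketch of the M&Ms training configuration. Only the numeric values quoted above come from the paper; the augmentation ranges, the placeholder model, and the assumed combination of the supervision and CACPS losses (L_sup + β·L_CACPS) are illustrative guesses, not the authors' implementation.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torchvision import transforms

# Hyperparameters quoted for the M&Ms setup; the reported SCGM differences
# are batch_size=8, epochs=50, beta=1.5, lambda_=0.8.
CONFIG = {
    "lr": 1e-4,
    "weight_decay": 0.1,
    "batch_size": 32,
    "epochs": 20,
    "crop_size": (288, 288),
    "beta": 3.0,     # balances supervision loss and CACPS loss (Equation 9)
    "lambda_": 1.0,  # weight referenced in Equation 1
}

# Reported augmentations: random rotation, scaling, crop, and flip.
# The specific ranges and probabilities here are assumptions.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, scale=(0.9, 1.1)),
    transforms.RandomCrop(CONFIG["crop_size"], pad_if_needed=True),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
])

# Placeholder network; the paper's actual segmentation architecture and
# CACPS loss live in the released repository and are not reproduced here.
model = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3, padding=1)

optimizer = AdamW(model.parameters(),
                  lr=CONFIG["lr"],
                  weight_decay=CONFIG["weight_decay"])

def total_loss(sup_loss: torch.Tensor, cacps_loss: torch.Tensor) -> torch.Tensor:
    """Combine losses as L_sup + beta * L_CACPS (assumed form of Equation 9)."""
    return sup_loss + CONFIG["beta"] * cacps_loss
```

A full reproduction would additionally require the paper's two-branch segmentation networks and the CACPS loss from the released code, trained for the reported 20 (M&Ms) or 50 (SCGM) epochs on the two-GPU setup described above.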