Towards Generic Semi-Supervised Framework for Volumetric Medical Image Segmentation

Authors: Haonan Wang, Xiaomeng Li

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our proposed framework on four benchmark datasets for SSL, class-imbalanced SSL, UDA, and Semi-DG. The results showcase notable improvements compared to state-of-the-art methods across all four settings, indicating the potential of our framework to tackle more challenging SSL scenarios."
Researcher Affiliation | Academia | Haonan Wang¹, Xiaomeng Li¹,²; ¹The Hong Kong University of Science and Technology; ²HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen
Pseudocode | Yes | "Algorithm 1: Training Pipeline of A&D."
Open Source Code | Yes | "Code and models are available at: https://github.com/xmed-lab/GenericSSL."
Open Datasets | Yes | "We evaluate our proposed A&D framework on four datasets for four tasks, i.e., the LASeg dataset [72] for SSL, the Synapse dataset [73] for class-imbalanced SSL, the MMWHS dataset [74] for UDA, and the M&Ms dataset [75] for Semi-DG."
Dataset Splits | Yes | "For the Synapse dataset... We randomly split them as 20, 4, and 6 scans for training, validation, and testing, respectively. For the LASeg dataset, we split the 100 scans into 80 for training and 20 for evaluation." (An illustrative split sketch follows the table.)
Hardware Specification | Yes | "We implement the proposed framework with PyTorch, using a single NVIDIA A100 GPU."
Software Dependencies | No | "We implement the proposed framework with PyTorch, using a single NVIDIA A100 GPU." (PyTorch is named, but no version or other dependency is specified.)
Experiment Setup | Yes | "The network parameters are optimized by SGD with Nesterov and a momentum of 0.9. We employ a poly decay strategy following [69]. For more implementation details, e.g., data preprocessing, learning rates, batch sizes, etc., please refer to the Appendix. We evaluate the prediction of the network with two metrics, including Dice and the average surface distance (ASD)." See Table 8 below.

Table 8: Hyper-parameters for different datasets.

Dataset | patch size | learning rate | batch size | feature size F
LASeg | 112 × 112 × 80 | 1e-2 | 4 | 32
Synapse | 64 × 128 × 128 | 3e-2 | 4 | 32
MMWHS | 128 × 128 × 128 | 5e-3 | 2 | 32
M&Ms | 32 × 128 × 128 | 1e-2 | 4 | 32
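
The reported splits (Synapse: 20/4/6 scans for train/val/test; LASeg: 80/20 scans for train/evaluation) can be reproduced with a simple random partition. The sketch below is illustrative rather than the authors' script: the scan IDs, the fixed seed, and the helper name `split_scans` are assumptions.

```python
import random

def split_scans(scan_ids, n_train, n_val, n_test, seed=0):
    """Randomly partition scan IDs into train/val/test lists.

    Illustrative only; the paper states the split sizes but not the
    seed or the ordering of scans.
    """
    assert len(scan_ids) == n_train + n_val + n_test
    ids = list(scan_ids)
    random.Random(seed).shuffle(ids)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# Synapse: 30 scans -> 20 train / 4 val / 6 test (as reported).
synapse_train, synapse_val, synapse_test = split_scans(range(30), 20, 4, 6)

# LASeg: 100 scans -> 80 train / 20 evaluation (as reported).
laseg_train, laseg_eval, _ = split_scans(range(100), 80, 20, 0)
```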
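
The optimizer and schedule described above (SGD with Nesterov momentum 0.9 plus poly decay) map onto standard PyTorch components. A minimal sketch follows, assuming a poly power of 0.9 and an iteration budget of 15,000, neither of which is stated in this excerpt; the stand-in model and the LASeg learning rate of 1e-2 from Table 8 are for illustration only.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

base_lr = 1e-2     # LASeg learning rate from Table 8
max_iters = 15000  # assumed iteration budget, not stated in this excerpt

# Stand-in for the segmentation network (the actual architecture is in the paper).
model = nn.Conv3d(1, 2, kernel_size=3, padding=1)

# SGD with Nesterov and momentum 0.9, as reported.
optimizer = SGD(model.parameters(), lr=base_lr, momentum=0.9, nesterov=True)

# Poly decay: lr(t) = base_lr * (1 - t / max_iters) ** 0.9 (power assumed).
scheduler = LambdaLR(optimizer, lr_lambda=lambda t: (1 - t / max_iters) ** 0.9)

for step in range(max_iters):
    # ... forward pass, segmentation loss, loss.backward() ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # decay the learning rate once per iteration
```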