Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective

Authors: Chenyu You, Weicheng Dai, Yifei Min, Fenglin Liu, David Clifton, S. Kevin Zhou, Lawrence Staib, James Duncan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings, and our methods consistently outperform state-of-the-art semi-supervised methods."
Researcher Affiliation | Academia | Yale University; University of Oxford; University of Science and Technology of China
Pseudocode | No | The paper describes its methods through text and mathematical equations but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | "Codes are available on here." (stated twice in the paper, in footnotes 1 and 5)
Open Datasets | Yes | "Our experiments are conducted on five 2D/3D representative datasets in semi-supervised medical image segmentation literature, including 2D benchmarks (i.e., ACDC [81], LiTS [82], and MMWHS [83]) and 3D benchmarks (i.e., LA [84] and in-house MP-MRI)."
Dataset Splits | Yes | "The ACDC dataset... We utilize 120, 40, and 40 scans for training, validation, and testing." (see the split sketch after the table)
Hardware Specification | Yes | "All experiments are conducted with PyTorch [101] on an NVIDIA RTX 3090 Ti."; "Hardware: Single NVIDIA GeForce RTX 3090 GPU"
Software Dependencies | Yes | "All experiments are conducted with PyTorch [101] on an NVIDIA RTX 3090 Ti."; "Software: PyTorch 1.10.2+cu113 and Python 3.8.11"
Experiment Setup | Yes | "We adopt an SGD optimizer with momentum 0.9 and weight decay 10^-4. The initial learning rate is set to 0.01. For pre-training, the networks are trained for 100 epochs with a batch size of 6. As for fine-tuning, the networks are trained for 200 epochs with a batch size of 8. The learning rate decays by a factor of 10 every 2500 iterations during the training. We apply the temperature with τ_t = 0.01, τ_s = 0.1, and τ = 0.5, respectively. The size of the memory bank is set to 36. For the CL training, we use the implementation from [17] and leave all parameters on their default settings, e.g., we apply the hyperparameters with λ1 = 0.01, λ2 = 1.0, and λ3 = 1.0." (see the configuration sketch after the table)
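
The 120/40/40 split quoted in the Dataset Splits row totals 200 scans. The snippet below is a minimal sketch of how such a split could be drawn reproducibly; the integer scan IDs, the fixed seed, and the shuffling procedure are illustrative assumptions, not the authors' actual split protocol.

    import random

    # The quoted split sums to 200 scans: 120 train + 40 val + 40 test.
    # Integer IDs and the fixed seed are assumptions for illustration only.
    scan_ids = list(range(200))
    rng = random.Random(42)
    rng.shuffle(scan_ids)

    train_ids = scan_ids[:120]
    val_ids = scan_ids[120:160]
    test_ids = scan_ids[160:]
    assert (len(train_ids), len(val_ids), len(test_ids)) == (120, 40, 40)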
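
The Experiment Setup row maps directly onto standard PyTorch optimizer and scheduler objects. The sketch below wires up the quoted values; the one-layer placeholder model, the dummy batch, and the mean() loss are assumptions standing in for the paper's actual network and combined objective.

    import torch
    from torch import nn
    from torch.optim import SGD
    from torch.optim.lr_scheduler import StepLR

    # Placeholder for the segmentation backbone (an assumption, not the paper's model).
    model = nn.Conv2d(1, 4, kernel_size=3, padding=1)

    # Values quoted in the Experiment Setup row.
    optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
    scheduler = StepLR(optimizer, step_size=2500, gamma=0.1)  # lr x0.1 every 2500 iterations

    TAU_T, TAU_S, TAU = 0.01, 0.1, 0.5             # temperatures tau_t, tau_s, tau
    LAMBDA_1, LAMBDA_2, LAMBDA_3 = 0.01, 1.0, 1.0  # loss weights (defaults from [17])
    MEMORY_BANK_SIZE = 36
    PRETRAIN_EPOCHS, PRETRAIN_BATCH_SIZE = 100, 6
    FINETUNE_EPOCHS, FINETUNE_BATCH_SIZE = 200, 8

    # Skeleton of a fine-tuning step on dummy data; the real objective would
    # combine LAMBDA_1/2/3-weighted loss terms, for which mean() stands in.
    for _ in range(3):
        x = torch.randn(FINETUNE_BATCH_SIZE, 1, 64, 64)
        loss = model(x).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # stepped per iteration, matching "every 2500 iterations"

Stepping the scheduler once per iteration, rather than once per epoch, is what reproduces the quoted decay of the learning rate by a factor of 10 every 2500 iterations.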