Label-Efficient Hybrid-Supervised Learning for Medical Image Segmentation

Authors: Junwen Pan, Qi Bi, Yanzhan Yang, Pengfei Zhu, Cheng Bian (pp. 2026-2034)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two hybrid-supervised medical segmentation datasets demonstrate that with only 10% strong labels, the proposed framework can leverage the weak labels efficiently and achieve competitive performance against the 100% strong-label supervised scenario.
Researcher Affiliation | Collaboration | Junwen Pan*1,2, Qi Bi*3, Yanzhan Yang1, Pengfei Zhu2, Cheng Bian1. 1Xiaohe Healthcare, ByteDance; 2College of Intelligence and Computing, Tianjin University; 3School of Remote Sensing and Information Engineering, Wuhan University.
Pseudocode | Yes | Algorithm 1: DII Learning. Require: strongly-annotated dataset D_S, weakly-annotated dataset D_W, DIIs Γ = {γ_k | γ_k ∈ [0, 1], k ∈ [1, M]}, network parameters θ, DII update interval τ, iteration steps T, and learning rates α, β. 1: for t ← 1, ..., T do 2: X_batch ← BatchSample(D_S ∪ D_W) 3: // Lower-level (DCR) gradient descent step 4: θ ← θ − α ∇_θ L(X_batch, θ, Γ) 5: if (t mod τ) ≠ 0 then 6: continue 7: end if 8: 9: X_S ← BatchSample(D_S) 10: // Estimate mean gradients on D_S 11: ḡ_S ← ∇_θ L(X_S, θ) 12: // Calculate per-instance gradients on D_W 13: g_k ← ∇_θ ℓ(x_k, y_k, θ), k ∈ {1, ..., M} 14: // Estimate inverse Hessian matrix 15: H^{-1} ≈ I 16: // Estimate upper-level gradients w.r.t. DIIs 17: ∂L(X_S, θ(Γ))/∂γ_k ≈ −ḡ_S^T H^{-1} g_k, k ∈ {1, ..., M} 18: // Upper-level gradient descent step 19: γ_k ← γ_k − β ∂L(X_S, θ(Γ))/∂γ_k, k ∈ {1, ..., M} 20: end for (A PyTorch sketch of this bilevel update follows the table.)
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | The hybrid-supervised polyp segmentation dataset has been built from two publicly available colonoscopic polyp datasets. CVC-EndoSceneStill (Vázquez et al. 2017) includes 912 images with elaborately annotated pixel-level labels. ... The hybrid-supervised AS-OCT segmentation dataset is modified from the training set of the Angle closure Glaucoma Evaluation (AGE) Challenge (Fu et al. 2019), which contains over 3200 AS-OCT images with annotations of the closure classification and the coordinates of scleral spurs.
Dataset Splits | Yes | For the AS-OCT segmentation task: 'Then, we follow the same partition protocol in which 60% of the data is used for training, 20% for validation, and the rest 20% for test.' (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for its experiments. It only mentions that the algorithm is implemented with the PyTorch framework.
Software Dependencies | No | The paper mentions the 'PyTorch framework (Paszke, Gross, et al. 2019)' and the 'DeepLabv3+ structure (Chen et al. 2018)' but does not specify version numbers for these software components, which are required for reproducibility.
Experiment Setup | Yes | γ1, ..., γM are initialized with 0.5 and clipped to the range [0, 1]. We adopt the vanilla Adam optimizer (Kingma and Ba 2015) to tune the DIIs, with default betas set to 0.9 and 0.999, respectively. Network parameters are updated iteratively via mini-batch SGD with momentum = 0.9, batch size = 16, and weight decay = 0.00005. The upper-level and lower-level learning rates are initially set to 0.1 and 0.002 by default, respectively. We finally chose the optimal configuration with τ = 400 and β = 0.1 in our whole study. ... The Dice score increases at the beginning and reaches its maximum at λ = 4. (A configuration summary follows the table.)
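
The block below is a minimal PyTorch sketch of the bilevel DII update summarized in the Pseudocode row, using the H^{-1} ≈ I approximation stated there and the hyperparameters quoted in the Experiment Setup row. The names `model`, `seg_loss`, `strong_loader`, `weak_loader`, and `num_weak` are placeholders of our own, `seg_loss` is assumed to return one loss value per instance, and the loaders are assumed to cycle indefinitely; this is a sketch under those assumptions, not the authors' released code.

```python
import torch

def dii_training(model, seg_loss, strong_loader, weak_loader, num_weak,
                 T=10000, tau=400, alpha=0.002, beta=0.1):
    """Sketch of the bilevel DII loop; seg_loss(pred, target) is assumed to
    return one loss value per instance in the batch."""
    # One DII gamma_k per weakly-annotated instance, initialised to 0.5.
    gammas = torch.full((num_weak,), 0.5, requires_grad=True)
    opt_theta = torch.optim.SGD(model.parameters(), lr=alpha,
                                momentum=0.9, weight_decay=5e-5)
    opt_gamma = torch.optim.Adam([gammas], lr=beta, betas=(0.9, 0.999))

    strong_iter, weak_iter = iter(strong_loader), iter(weak_loader)
    for t in range(1, T + 1):
        # Lower level (DCR): one SGD step on the DII-weighted mixed batch.
        xs, ys = next(strong_iter)
        xw, yw, idx = next(weak_iter)        # idx: indices of the weak instances
        loss = (seg_loss(model(xs), ys).mean()
                + (gammas[idx].detach() * seg_loss(model(xw), yw)).mean())
        opt_theta.zero_grad()
        loss.backward()
        opt_theta.step()

        if t % tau != 0:                     # update the DIIs every tau steps
            continue

        # Upper level: gradient-alignment update of the DIIs (H^-1 ~ I).
        xs, ys = next(strong_iter)
        g_s = torch.autograd.grad(seg_loss(model(xs), ys).mean(),
                                  model.parameters())
        g_s = torch.cat([g.flatten() for g in g_s])

        grad_gamma = torch.zeros(num_weak)
        for xk, yk, k in zip(xw, yw, idx):   # per-instance weak gradients
            g_k = torch.autograd.grad(
                seg_loss(model(xk[None]), yk[None]).mean(), model.parameters())
            g_k = torch.cat([g.flatten() for g in g_k])
            grad_gamma[k] = -(g_s @ g_k)     # dL/dgamma_k ~ -g_s^T g_k

        gammas.grad = grad_gamma
        opt_gamma.step()
        with torch.no_grad():
            gammas.clamp_(0.0, 1.0)          # keep DIIs in [0, 1]
    return gammas
```

A descent step on gamma_k with this gradient increases the weight of weak instances whose gradients align with the strong-label gradient and suppresses those that conflict, which is the intended behaviour of the DIIs.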
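For the Dataset Splits row, a minimal sketch of the reported 60%/20%/20% partition, assuming a simple seeded random split over a list of case identifiers (the seed and helper name are illustrative, not from the paper):

```python
import random

def split_60_20_20(case_ids, seed=0):
    """Partition case identifiers into 60% train / 20% val / 20% test."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_train, n_val = int(0.6 * len(ids)), int(0.2 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```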
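For the Experiment Setup row, the quoted hyperparameters gathered in one place; the dictionary layout and key names are ours, only the values come from the paper.

```python
# Reported hyperparameters (values from the paper; layout is illustrative).
TRAIN_CONFIG = {
    "dii_init": 0.5,                     # gamma_k initial value, clipped to [0, 1]
    "dii_optimizer": {"name": "Adam", "betas": (0.9, 0.999)},
    "net_optimizer": {"name": "SGD", "momentum": 0.9,
                      "weight_decay": 0.00005, "batch_size": 16},
    "upper_level_lr": 0.1,               # beta, learning rate for the DIIs
    "lower_level_lr": 0.002,             # alpha, learning rate for the network
    "dii_update_interval": 400,          # tau
    "lambda": 4,                         # loss weight at which Dice peaks
}
```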