Co-training with High-Confidence Pseudo Labels for Semi-supervised Medical Image Segmentation
Authors: Zhiqiang Shen, Peng Cao, Hua Yang, Xiaoli Liu, Jinzhu Yang, Osmar R. Zaiane
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four public medical image datasets including 2D and 3D modalities demonstrate the superiority of UCMT over the state-of-the-art. |
| Researcher Affiliation | Collaboration | Zhiqiang Shen1,2, Peng Cao1,2, Hua Yang3, Xiaoli Liu4, Jinzhu Yang1,2, Osmar R. Zaiane5. 1College of Computer Science and Engineering, Northeastern University, Shenyang, China; 2Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China; 3College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, China; 4DAMO Academy, Alibaba Group, China; 5Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Alberta, Canada |
| Pseudocode | Yes | Algorithm 1 UCMT algorithm |
| Open Source Code | Yes | Code is available at: https://github.com/Senyh/UCMT. |
| Open Datasets | Yes | We validate our method on the ISIC dataset [Codella et al., 2018]... We evaluate the proposed method on the two public colonoscopy datasets, including Kvasir-SEG [Jha et al., 2020] and CVC-ClinicDB [Bernal et al., 2015]. We evaluate our method on the 3D left atrial (LA) segmentation challenge dataset... |
| Dataset Splits | Yes | Following [Wang et al., 2022], we adopt 1815 images for training and 779 images for validation. ... Following [Yu et al., 2019a], we split the 100 scans into 80 samples for training and 20 samples for evaluation. |
| Hardware Specification | Yes | We implement our method using PyTorch framework on a NVIDIA Quadro RTX 6000 GPU. |
| Software Dependencies | No | The paper states "We implement our method using PyTorch framework" but does not specify a version for PyTorch or any other software dependency. |
| Experiment Setup | Yes | We adopt AdamW as an optimizer with the fixed learning rate of 1e-4. The batch size is set to 16, including 8 labeled samples and 8 unlabeled samples. All 2D models are trained for 50 epochs, while the 3D models are trained for 1000 epochs. We empirically set λm = 1, k = 2, r = 16, α = 0.99 and β = 0.99 for our method in the experiments. |
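For reproduction purposes, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. The sketch below is hypothetical (key names are illustrative and not taken from the UCMT codebase); the values themselves come directly from the paper's reported setup.

```python
# Hypothetical reproduction config for UCMT; key names are illustrative.
# Values are the ones reported in the paper's experiment setup.
UCMT_CONFIG = {
    "optimizer": "AdamW",
    "learning_rate": 1e-4,       # fixed, no schedule reported
    "batch_size": 16,            # 8 labeled + 8 unlabeled per batch
    "labeled_per_batch": 8,
    "unlabeled_per_batch": 8,
    "epochs_2d": 50,             # all 2D models
    "epochs_3d": 1000,           # all 3D models
    "lambda_m": 1,
    "k": 2,
    "r": 16,
    "alpha": 0.99,
    "beta": 0.99,
}

# Sanity check: the semi-supervised batch is split evenly.
assert (UCMT_CONFIG["labeled_per_batch"] + UCMT_CONFIG["unlabeled_per_batch"]
        == UCMT_CONFIG["batch_size"])
```

A reproduction script would still need to pin a PyTorch version, since the paper does not report one.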