Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation

Authors: Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng

AAAI 2020, pp. 775-783

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods. We extensively evaluate our method on the MM-WHS 2017 Challenge dataset (Zhuang et al. 2019).
Researcher Affiliation | Academia | Kang Li (1), Lequan Yu (1), Shujun Wang (1), Pheng-Ann Heng (1,2); (1) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (2) Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; {kli, lqyu, sjwang, pheng}@cse.cuhk.edu.hk
Pseudocode | Yes | Algorithm 1: Training procedure of the proposed approach. Input: a batch of (x_t, y_t) from the target-modality dataset X_t and (x_a, y_a) from the assistant-modality dataset X_a. Output: the prediction p_t of input x_t. (An illustrative sketch of this training step appears below the table.)
Open Source Code | No | The paper does not provide a specific link or explicit statement about the release of its own source code.
Open Datasets | Yes | We extensively evaluate our method on the Multi-modality Whole Heart Segmentation Challenge 2017 (MM-WHS 2017) dataset, which contains unpaired 20 MRI and 20 CT volumes as the training data and the annotations of 7 cardiac substructures... (Zhuang et al. 2019).
Dataset Splits | Yes | We randomly split 20 CT volumes into two folds and perform two-fold cross validation. In each fold, we use all 20 MRI volumes and 10 CT volumes to train our network. (A sketch of this protocol appears below the table.)
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions software components like 'Unet', 'residual blocks', and 'Adam optimizer' but does not provide specific version numbers for these or any other key software dependencies.
Experiment Setup | Yes | The hyperparameters λ_cyc, λ_kd^1 and λ_kd^2 are empirically set as 10, 0.5 and 1, respectively. In training, the Adam optimizer is used to optimize the generators, discriminators and segmentors, all with a learning rate of 2 × 10^−4; the segmentors additionally decay their learning rate by a factor of 0.9 every two epochs. (Mirrored in the optimizer sketch below the table.)
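
The Algorithm 1 excerpt in the Pseudocode row gives only the input/output signature of the training procedure. Below is a minimal PyTorch sketch of what online mutual knowledge distillation between the two segmentors could look like; the TinySegmentor class, the KL-based distillation term, and the step wiring are illustrative assumptions, not the authors' released implementation (the paper additionally trains CycleGAN-style generators so both segmentors see the same anatomy, which is elided here). Only the λ_kd weights are taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegmentor(nn.Module):
    """Stand-in for the paper's U-Net-style segmentor (illustrative only)."""
    def __init__(self, in_ch=1, n_classes=8):  # 7 substructures + background
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

seg_t = TinySegmentor()  # target-modality (CT) segmentor
seg_a = TinySegmentor()  # assistant-modality (MRI) segmentor

lambda_kd1, lambda_kd2 = 0.5, 1.0  # weights reported in the Experiment Setup row

def kd_term(student_logits, teacher_logits):
    # Soft-label distillation: KL divergence between class posteriors, with
    # the teacher branch detached so gradients flow to the student only.
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1).detach(),
                    reduction="batchmean")

def mutual_step(x_t, y_t, x_a, y_a):
    """One sketched step: supervised segmentation loss on each branch plus
    bidirectional distillation between the two segmentors' predictions."""
    logits_t, logits_a = seg_t(x_t), seg_a(x_a)
    sup = F.cross_entropy(logits_t, y_t) + F.cross_entropy(logits_a, y_a)
    kd = lambda_kd1 * kd_term(logits_t, logits_a) + \
         lambda_kd2 * kd_term(logits_a, logits_t)
    return sup + kd
```

A call such as mutual_step(ct_batch, ct_labels, translated_mri_batch, mri_labels) would then be followed by the usual backward pass and optimizer steps for each network group.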
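The two-fold cross-validation protocol from the Dataset Splits row is simple enough to spell out. This sketch assumes integer volume IDs and a fixed random seed, neither of which the paper reports:

```python
import random

ct_ids = list(range(20))   # the 20 CT volumes of MM-WHS 2017
mri_ids = list(range(20))  # all 20 MRI volumes are used for training in both folds

random.seed(0)             # assumed seed; the paper does not report one
random.shuffle(ct_ids)
fold_a, fold_b = ct_ids[:10], ct_ids[10:]

for train_ct, test_ct in [(fold_a, fold_b), (fold_b, fold_a)]:
    # Each fold trains on 10 CT volumes plus all 20 MRI volumes and
    # evaluates on the 10 held-out CT volumes.
    print(f"train CT: {sorted(train_ct)} + 20 MRI | test CT: {sorted(test_ct)}")
```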
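The Experiment Setup row maps directly onto standard PyTorch optimizer and scheduler calls. A sketch, assuming the four network groups named in the quote (the nn.Linear modules and the 100-epoch budget are placeholders, not details from the paper):

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Placeholder modules standing in for the generators, discriminators
# and the two segmentors trained jointly in the paper.
nets = {name: torch.nn.Linear(1, 1) for name in
        ("generators", "discriminators", "segmentor_t", "segmentor_a")}

# Adam with lr = 2e-4 for every network group, as reported.
optims = {name: Adam(net.parameters(), lr=2e-4) for name, net in nets.items()}

# Only the segmentors decay their learning rate: factor 0.9 every two epochs.
schedulers = [StepLR(optims[n], step_size=2, gamma=0.9)
              for n in ("segmentor_t", "segmentor_a")]

for epoch in range(100):  # the epoch budget is not reported; 100 is a placeholder
    # ... forward/backward passes and optims[...].step() go here ...
    for sch in schedulers:
        sch.step()  # advance the per-epoch decay schedule once per epoch
```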