MNet: Rethinking 2D/3D Networks for Anisotropic Medical Image Segmentation

Authors: Zhangfu Dong, Yuting He, Xiaoming Qi, Yang Chen, Huazhong Shu, Jean-Louis Coatrieux, Guanyu Yang, Shuo Li

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments are performed on four public datasets (CT & MR); the results consistently demonstrate that the proposed MNet outperforms the other methods.
Researcher Affiliation | Academia | 1) LIST, Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China; 2) Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing; 3) Centre de Recherche en Information Biomédicale Sino-Français (CRIBs); 4) Dept. of Medical Biophysics, University of Western Ontario, London, ON, Canada
Pseudocode | No | The paper describes the architecture and training procedure, but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and datasets are available at: https://github.com/zfdong-code/MNet
Open Datasets | Yes | Four widely used public datasets covering multiple modalities are selected for comprehensive evaluation. Two CT datasets: 1) the Liver and Liver Tumor Segmentation challenge 2017 (LiTS)... 2) the Kidney and Kidney Tumor Segmentation challenge 2019 (KiTS)... Two MR datasets: 1) the Multimodal Brain Tumor Segmentation challenge 2020 (BraTS)... 2) the T2 MR dataset of the PROMISE challenge 2012...
Dataset Splits | No | "Each dataset is randomly split into the training set (80%) and testing set (20%)." The split ratio is stated, but no random seed, case lists, or validation split is provided, so the exact splits cannot be reproduced.
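To illustrate why the quoted split is under-specified, here is a minimal sketch of an 80/20 random split; the seed and the helper name `split_cases` are assumptions for illustration, not the authors' code (the paper fixes neither).

```python
import random

def split_cases(case_ids, train_frac=0.8, seed=0):
    """Randomly split case IDs into train/test subsets.

    `seed` is an assumed value: without it (or published case lists),
    two runs of the same 80/20 split yield different test sets.
    """
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

# e.g. 100 cases -> 80 training, 20 testing, with no overlap
train_ids, test_ids = split_cases(range(100))
```

Changing (or omitting) the seed changes which 20% of cases end up in the test set, which is exactly the reproducibility gap flagged above.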
Hardware Specification | Yes | The experiments are performed with a 32GB V100 GPU.
Software Dependencies | No | "Networks implemented with MindSpore and PyTorch are available at: https://github.com/zfdong-code/MNet." The frameworks are named, but no version numbers are given.
Experiment Setup | Yes | Stochastic gradient descent (SGD) with a momentum of 0.99 is selected as the optimizer. The initial learning rate (0.01) is gradually reduced according to the poly learning-rate policy [Chen et al., 2018], and the maximum epoch is set to 500. Following the default settings of nnU-Net, the batch sizes for LiTS, KiTS, BraTS, and PROMISE are 2, 2, 4, and 2, while the patch sizes are 40×224×192, 32×224×224, 28×192×160, and 16×320×320, respectively.
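The hyperparameters above can be collected into a short sketch. This is not the authors' training script; the poly-policy exponent of 0.9 is an assumption taken from nnU-Net's common default, since the paper only cites the policy.

```python
MAX_EPOCH = 500
BASE_LR = 0.01

def poly_lr(epoch, base_lr=BASE_LR, max_epoch=MAX_EPOCH, exponent=0.9):
    """Poly learning-rate policy: base_lr * (1 - epoch/max_epoch)**exponent.

    The exponent 0.9 is an assumed value (nnU-Net's usual default);
    the paper cites the policy [Chen et al., 2018] without stating it.
    """
    return base_lr * (1 - epoch / max_epoch) ** exponent

# Per-dataset batch and patch sizes quoted from the paper (depth x height x width).
SETUP = {
    "LiTS":    {"batch": 2, "patch": (40, 224, 192)},
    "KiTS":    {"batch": 2, "patch": (32, 224, 224)},
    "BraTS":   {"batch": 4, "patch": (28, 192, 160)},
    "PROMISE": {"batch": 2, "patch": (16, 320, 320)},
}

# e.g. poly_lr(0) -> 0.01, decaying smoothly toward 0 at epoch 500
```

Note that the patch depths (40, 32, 28, 16) are much smaller than the in-plane sizes, reflecting the anisotropic voxel spacing that motivates MNet.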