Meta Segmentation Network for Ultra-Resolution Medical Images

Authors: Tong Wu, Bicheng Dai, Shuxin Chen, Yanyun Qu, Yuan Xie

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The experimental results on two challenging ultra-resolution medical datasets, BACH and ISIC, show that MSN achieves the best performance compared with state-of-the-art approaches. |
| Researcher Affiliation | Academia | Tong Wu¹, Bicheng Dai¹, Shuxin Chen¹ and Yanyun Qu¹ and Yuan Xie². ¹Fujian Key Laboratory of Sensing and Computing for Smart City, School of Informatics, Xiamen University, Fujian, China; ²School of Computer Science and Technology, East China Normal University, Shanghai, China. {tongwu, nejordai, chenshuxin}@stu.xmu.edu.cn, yyqu@xmu.edu.cn, yxie@cs.ecnu.edu.cn |
| Pseudocode | No | The paper describes the proposed method through text and architectural diagrams but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code of the described methodology. |
| Open Datasets | Yes | BACH [Aresta et al., 2019] is composed of Hematoxylin and Eosin (H&E) stained breast histology microscopy and whole-slide images (WSI). [...] ISIC [Tschandl et al., 2018; Codella et al., 2018] is an ultra-resolution medical dataset for pigmented skin lesions, which contains 2596 images in total. |
| Dataset Splits | Yes | We randomly split 10 WSIs into 7, 1, 2 images for the training set, the sub-training set and the test set, respectively. [...] We randomly divide the dataset into training, sub-training and testing sets with 2077, 360 and 157 images, respectively. (A seeded split sketch is given after the table.) |
| Hardware Specification | Yes | The whole model is trained in PyTorch [Ketkar, 2017] with a single 1080Ti GPU. |
| Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not provide version numbers for its software dependencies (e.g., a specific PyTorch release). (A version-logging sketch is given after the table.) |
| Experiment Setup | Yes | We train the meta-branch for 30 epochs, and tune the non-meta-branches as well as Meta-FM for only 10 epochs, with a batch size of 32. The optimizer Adam [Kingma and Ba, 2014] is utilized with an initial learning rate of 0.0001 to update the parameters of the network. (A two-stage training sketch is given after the table.) |
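The quoted splits are straightforward to reproduce in spirit, although the paper publishes neither a random seed nor file lists. Below is a minimal sketch of a seeded three-way split in the reported ISIC proportions (2077/360/157); the fixed seed and the `image_paths` list are assumptions, not details from the paper.

```python
import random

def three_way_split(items, n_train, n_subtrain, n_test, seed=0):
    """Randomly partition items into train / sub-train / test subsets."""
    rng = random.Random(seed)  # fixed seed is an assumption; the paper states no seed
    shuffled = list(items)
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    subtrain = shuffled[n_train:n_train + n_subtrain]
    test = shuffled[n_train + n_subtrain:n_train + n_subtrain + n_test]
    return train, subtrain, test

# ISIC proportions reported in the paper: 2077 / 360 / 157 images.
# image_paths is a hypothetical list of dataset file paths.
# train, subtrain, test = three_way_split(image_paths, 2077, 360, 157)
```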
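Since missing version numbers are the sole reason the Software Dependencies row is marked No, the gap could have been closed with a one-line environment record. A sketch, assuming a standard PyTorch installation:

```python
import sys
import torch

# Log the exact interpreter and framework versions alongside experiment results.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)  # None when PyTorch is a CPU-only build
```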
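The quoted schedule maps directly onto a two-stage loop: first the meta-branch alone, then the non-meta-branches together with Meta-FM. Below is a minimal sketch of the reported settings (Adam, initial learning rate 1e-4, batch size 32, 30 then 10 epochs); the model attribute names (`msn`, `meta_branch`, `non_meta_branches`, `meta_fm`), the data loaders, and the binary cross-entropy loss are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torch.optim import Adam

def train_stage(model, params, loader, epochs, lr=1e-4):
    """Train the selected parameters of `model` for a fixed number of epochs."""
    optimizer = Adam(params, lr=lr)  # Adam with initial LR 1e-4, as reported
    model.train()
    for _ in range(epochs):
        for images, masks in loader:  # DataLoader built with batch_size=32, per the paper
            optimizer.zero_grad()
            logits = model(images)
            # Segmentation loss is assumed here; the quoted setup does not name the loss.
            loss = F.binary_cross_entropy_with_logits(logits, masks)
            loss.backward()
            optimizer.step()

# Stage 1: train the meta-branch for 30 epochs (attribute names are hypothetical).
# train_stage(msn, msn.meta_branch.parameters(), train_loader, epochs=30)
# Stage 2: tune the non-meta-branches and Meta-FM for only 10 epochs.
# params = [*msn.non_meta_branches.parameters(), *msn.meta_fm.parameters()]
# train_stage(msn, params, subtrain_loader, epochs=10)
```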