Biomedical Image Segmentation via Representative Annotation

Authors: Hao Zheng, Lin Yang, Jianxu Chen, Jun Han, Yizhe Zhang, Peixian Liang, Zhuo Zhao, Chaoli Wang, Danny Z. Chen (pp. 5901-5908)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our RA approach using three datasets (two 2D and one 3D) and show our framework yields competitive segmentation results compared with state-of-the-art methods. To show the effectiveness and efficiency of our RA framework, we evaluate RA on two 2D datasets and one 3D dataset: the MICCAI 2015 Gland Segmentation Challenge (GlaS) dataset (Sirinukunwattana et al. 2017), a fungus dataset (Zhang et al. 2017), and the HVSMR 2016 Challenge dataset (Pace et al. 2015).
Researcher Affiliation | Academia | Hao Zheng, Lin Yang, Jianxu Chen, Jun Han, Yizhe Zhang, Peixian Liang, Zhuo Zhao, Chaoli Wang, Danny Z. Chen. Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA. {hzheng3, lyang5, jchen16, jhan5, yzhang29, pliang, zzhao3, cwang11, dchen}@nd.edu. J. Chen is now at Allen Institute for Cell Science.
Pseudocode | Yes | Algorithm 1: The Representative Selection Algorithm
Open Source Code | No | The paper mentions using open-source tools such as PyTorch and TensorFlow but provides no statement about releasing its own implementation code and no link to a repository.
Open Datasets | Yes | We evaluate RA on two 2D datasets and one 3D dataset: the MICCAI 2015 Gland Segmentation Challenge (GlaS) dataset (Sirinukunwattana et al. 2017), a fungus dataset (Zhang et al. 2017), and the HVSMR 2016 Challenge dataset (Pace et al. 2015).
Dataset Splits | No | The GlaS dataset contains 85 training images (37 benign (BN), 48 malignant (MT)) and 80 test images (33 BN and 27 MT in Part A, 4 BN and 16 MT in Part B), and "As in (Zhang et al. 2017), we use 4 images as the training set and 80 images as the test set." While training and test splits are described, no validation split is explicitly mentioned for any dataset.
Hardware Specification | Yes | An NVIDIA Tesla P100 GPU with 16 GB GPU memory is used for both training and testing.
Software Dependencies | No | Our FENs and 2D FCN are implemented with PyTorch (Paszke et al. 2017) and Torch7 (Collobert, Kavukcuoglu, and Farabet 2011), respectively. Our 3D CliqueVoxNet is implemented with TensorFlow (Abadi et al. 2016). The paper names these software tools but does not specify their version numbers (e.g., PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | All the models are initialized using a Gaussian distribution (µ = 0, σ = 0.01) and trained with the Adam optimizer (β1 = 0.9, β2 = 0.999, ε = 1e-10). We also adopt the poly learning rate policy with the power variable equal to 0.9 and the max iteration number equal to 50k. To leverage the limited training data, we perform data augmentation (i.e., random rotation by 90, 180, and 270 degrees, as well as image flipping along the axial planes) to reduce overfitting.
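The setup quoted above pins down three reproducible pieces: the Gaussian weight initialization, the Adam hyperparameters, and the poly learning rate schedule with its rotation/flip augmentation. A minimal pure-Python sketch of the schedule and augmentation follows; the function names are illustrative (the paper does not publish code), and the Adam settings are shown only as keyword arguments one would pass to an optimizer such as `torch.optim.Adam`.

```python
# Adam hyperparameters as stated in the paper; pass to e.g. torch.optim.Adam.
ADAM_KW = dict(betas=(0.9, 0.999), eps=1e-10)

def poly_lr(base_lr, iteration, max_iter=50_000, power=0.9):
    """Poly learning rate policy: lr = base_lr * (1 - iter/max_iter)^power.

    max_iter=50k and power=0.9 are the values quoted from the paper.
    """
    return base_lr * (1.0 - iteration / max_iter) ** power

def rot90(img):
    """Rotate a 2D image (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip(img):
    """Flip a 2D image horizontally."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the rotation (0/90/180/270) and flip variants described
    in the paper's augmentation scheme: 8 versions of each image."""
    r90 = rot90(img)
    r180 = rot90(r90)
    r270 = rot90(r180)
    rotations = [img, r90, r180, r270]
    return rotations + [flip(v) for v in rotations]
```

For example, the learning rate decays smoothly from `base_lr` at iteration 0 to 0 at iteration 50k, and `augment` yields 8 distinct variants of a non-symmetric patch; during training one would typically sample one variant at random per iteration rather than enumerate all eight.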