Object-Guided Instance Segmentation for Biological Images

Authors: Jingru Yi, Hui Tang, Pengxiang Wu, Bo Liu, Daniel J. Hoeppner, Dimitris N. Metaxas, Lianyi Han, Wei Fan. Pages 12677-12684.

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed method achieves state-of-the-art performance on three biological datasets: cell nuclei (DSB2018), plant phenotyping, and neural cells.
Researcher Affiliation | Collaboration | Jingru Yi (1), Hui Tang (2), Pengxiang Wu (1), Bo Liu (1), Daniel J. Hoeppner (3), Dimitris N. Metaxas (1), Lianyi Han (2), Wei Fan (2). (1) Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA; (2) Tencent Hippocrates Research Labs, Palo Alto, CA 94306, USA; (3) Lieber Institute for Brain Development, MD 21205, USA. jy486@cs.rutgers.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link, an explicit statement of code release, or mention of code in supplementary materials) for the described methodology.
Open Datasets | Yes | DSB2018: the cell nuclei dataset is obtained from the training dataset of the 2018 Data Science Bowl. Plant Phenotyping: the plant phenotyping dataset (Minervini et al. 2015b; 2015a) contains 473 top-down-view plant images of various sizes.
Dataset Splits | Yes | DSB2018: the original 670 annotated images are randomly split into training (402 images), validation (134 images), and testing (134 images) sets. Plant phenotyping (473 images): 284 for training, 95 for validation, and 94 for testing. Neural cells: 386 images randomly selected for training, 129 for validation, and 129 for testing.
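The reported three-way splits can be reproduced with a simple shuffle-and-slice, sketched below. The paper does not publish its splitting code or random seed, so the seed and function name here are illustrative assumptions.

```python
import random

def split_dataset(items, n_train, n_val, n_test, seed=0):
    """Randomly partition a list of image IDs into train/val/test.

    A sketch of the random split described in the report; the seed
    is arbitrary, as the authors' actual seed is not published.
    """
    assert n_train + n_val + n_test == len(items)
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# DSB2018: 670 annotated images -> 402 / 134 / 134
train, val, test = split_dataset(range(670), 402, 134, 134)
```

The same call covers the other two datasets, e.g. `split_dataset(range(473), 284, 95, 94)` for plant phenotyping.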
Hardware Specification | Yes | The model is implemented in PyTorch on NVIDIA M40 GPUs. Speed (FPS, frames per second) is measured on a single NVIDIA GeForce GTX 1080 GPU.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number for it or any other software dependency.
Experiment Setup | Yes | Training images are augmented with random cropping and random horizontal/vertical flipping. Training runs for 100 epochs, stopping early when the validation loss no longer decreases significantly. The input resolution for training and testing is 512×512. The backbone weights are pre-trained on the ImageNet dataset; the remaining weights are initialized from a standard Gaussian distribution. Adam with an initial learning rate of 1.25e-4 is used to optimize the model weights.
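The Adam update with the reported learning rate of 1.25e-4 can be sketched for a single scalar weight as below. The beta and epsilon values are the common PyTorch defaults, which is an assumption; the paper only states the learning rate.

```python
import math

# Learning rate from the reported setup; beta/eps are assumed
# PyTorch defaults, not values stated in the paper.
LR = 1.25e-4
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8

def adam_step(w, grad, m, v, t):
    """One Adam update for a single scalar weight (illustrative only).

    m, v are the running first/second moment estimates; t is the
    1-indexed step count used for bias correction.
    """
    m = BETA1 * m + (1 - BETA1) * grad
    v = BETA2 * v + (1 - BETA2) * grad * grad
    m_hat = m / (1 - BETA1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - BETA2 ** t)          # bias-corrected second moment
    w = w - LR * m_hat / (math.sqrt(v_hat) + EPS)
    return w, m, v
```

At the first step with unit gradient, the bias-corrected update reduces to approximately the learning rate itself, which is a quick sanity check on the implementation.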