Nuclei Segmentation via a Deep Panoptic Model with Semantic Feature Fusion
Authors: Dongnan Liu, Donghao Zhang, Yang Song, Chaoyi Zhang, Fan Zhang, Lauren O'Donnell, Weidong Cai
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three different histopathology datasets demonstrate that our method outperforms the state-of-the-art nuclei segmentation methods and popular semantic and instance segmentation models by a large margin. |
| Researcher Affiliation | Academia | ¹School of Computer Science, University of Sydney, Australia; ²School of Computer Science and Engineering, University of New South Wales, Australia; ³Brigham and Women's Hospital, Harvard Medical School, USA |
| Pseudocode | No | The paper provides architectural diagrams and parameter tables but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete statement or link indicating that the source code for their proposed method is publicly available. |
| Open Datasets | Yes | We used three public datasets in this study. The first dataset is from The Cancer Genome Atlas (TCGA) at 40× magnification [Kumar et al., 2017]. ... The second dataset from [Naylor et al., 2018] focuses in particular on Triple Negative Breast Cancer (TNBC). ... The third dataset is the MICCAI 2017 Digital Pathology Challenge dataset [Vu et al., 2018], also referred to as Cell17. |
| Dataset Splits | Yes | Among the 16 training images from four different organs, we randomly selected one image from each organ for validation and used the remaining 12 images for training. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or specific computational resources. |
| Software Dependencies | No | The paper states 'We implemented our experiments using Pytorch [Paszke et al., 2017]' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | In all experiments, we employed stochastic gradient descent (SGD) as the optimizer with a momentum of 0.9 and a weight decay of 0.0001 to train our model. The learning rate varies in each experiment with the same linear warming up in the first 500 iterations. |
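The quoted setup (SGD, momentum 0.9, weight decay 0.0001, linear warmup over the first 500 iterations) can be sketched as a small schedule function. Note this is a minimal reconstruction, not the authors' code: the paper does not state the warmup's starting value or the per-experiment base learning rates, so the ramp-from-zero form below is an assumption.

```python
def warmup_lr(iteration, base_lr, warmup_iters=500):
    """Linear learning-rate warmup matching the paper's description.

    Assumption: the rate ramps linearly from ~0 up to `base_lr` over the
    first `warmup_iters` iterations, then stays at `base_lr`. The paper
    only says the learning rate "varies in each experiment with the same
    linear warming up in the first 500 iterations".
    """
    if iteration < warmup_iters:
        # Fraction of warmup completed (reaches 1.0 at the last warmup step).
        return base_lr * (iteration + 1) / warmup_iters
    return base_lr


# Hypothetical base_lr of 0.01 for illustration; the paper's values differ
# per experiment. With PyTorch this schedule would typically be applied via
# torch.optim.lr_scheduler.LambdaLR on top of
# torch.optim.SGD(params, lr=base_lr, momentum=0.9, weight_decay=0.0001).
schedule = [warmup_lr(i, base_lr=0.01) for i in (0, 250, 499, 500, 1000)]
```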