Biological Instance Segmentation with a Superpixel-Guided Graph
Authors: Xiaoyu Liu, Wei Huang, Yueyi Zhang, Zhiwei Xiong
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three representative biological datasets demonstrate the superiority of our method over existing state-of-the-art methods. |
| Researcher Affiliation | Academia | Xiaoyu Liu¹, Wei Huang¹, Yueyi Zhang¹,², Zhiwei Xiong¹,² — ¹University of Science and Technology of China, ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper describes its methods using text and mathematical formulations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/liuxy1103/BISSG. |
| Open Datasets | Yes | Plant Phenotype Images: The CVPPP A1 dataset [Scharr et al., 2014] is one of the most popular instance segmentation benchmarks... Electron Microscopy Images: AC3/AC4 [Kasthuri et al., 2015], CREMI, and FIB-25 [Takemura et al., 2015] are three popular Electron Microscopy (EM) datasets. ... Fluorescence Microscopy Images: The BBBC039V1 dataset [Ljosa et al., 2012] contains 200 images... |
| Dataset Splits | Yes | We randomly select 20 images from the training set as the validation set. ... We use the top 226 slices of AC3 for training, the rest 30 slices for validation, and AC4 for testing. ... We use the top 50 slices for testing, the middle 60 slices for training, and the bottom 15 slices for validation. ... We use one sub-volume for training, in which the bottom 50 slices are used for validation, and use the other sub-volume for testing. ... Following the official data split, we use 100 images for training, 50 images for validation, and the rest 50 images for testing. |
| Hardware Specification | Yes | Experiments are implemented on one NVIDIA Titan XP GPU. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not provide specific version numbers for any software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | We pre-train the U-Net for 200 epochs during the first phase. Then we train the GNN for 200 epochs during the second phase. We update the parameters of the GNN and the bottom decoder branch of the U-Net synchronously, but the parameters of the encoder and the top decoder branch of the U-Net are frozen in the second phase. Each EGNN layer is supervised by the RA loss and the MAS loss to update node and edge features. The initial learning rates are set as 10⁻⁴ and 10⁻³ for the U-Net and GNN respectively, decayed by half when the loss stops improving for 30 epochs. The batch sizes are set as 4 and 1 for the first and second phases. The Adam optimizer is adopted during training with β1 = 0.9 and β2 = 0.99. |
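The learning-rate schedule quoted in the Experiment Setup row (halve the rate when the loss stops improving for 30 epochs) corresponds to a standard reduce-on-plateau policy. Below is a minimal pure-Python sketch of that policy; the class and attribute names are hypothetical and not from the paper's released code, which the authors host at the GitHub link above.

```python
class ReduceLROnPlateau:
    """Halve the learning rate once the monitored loss has not
    improved for `patience` consecutive epochs (a sketch of the
    schedule described in the paper, not the authors' code)."""

    def __init__(self, lr, factor=0.5, patience=30):
        self.lr = lr                # current learning rate
        self.factor = factor        # "decayed by half" -> 0.5
        self.patience = patience    # "stops improving for 30 epochs"
        self.best = float("inf")    # best loss seen so far
        self.bad_epochs = 0         # epochs since last improvement

    def step(self, loss):
        """Call once per epoch with the validation loss; returns the
        (possibly decayed) learning rate for the next epoch."""
        if loss < self.best:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr


# Paper's settings: initial rates of 1e-4 (U-Net) and 1e-3 (GNN).
unet_sched = ReduceLROnPlateau(lr=1e-4)
gnn_sched = ReduceLROnPlateau(lr=1e-3)
```

In a framework such as PyTorch, the same behavior is typically obtained with a built-in plateau scheduler rather than a hand-rolled class; this sketch only makes the quoted hyperparameters concrete.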