Alternative Baselines for Low-Shot 3D Medical Image Segmentation—An Atlas Perspective
Authors: Shuxin Wang, Shilei Cao, Dong Wei, Cong Xie, Kai Ma, Liansheng Wang, Deyu Meng, Yefeng Zheng
AAAI 2021, pp. 634–642
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Since the tasks of segmenting brain anatomical structures and abdominal organs are noticeably different, we evaluate the Siamese-Baseline on brain anatomical structures (the CANDI Dataset (Kennedy et al. 2011)), and the IDA-Baseline on abdominal organs (the Multi-organ Dataset (Gibson et al. 2018; Roth et al. 2015; Clark et al. 2013; Xu et al. 2016)). For both datasets, we randomly select 20 volumes as test data, and use the others for training. The details of both datasets can be found in the supplementary material." |
| Researcher Affiliation | Collaboration | 1 Department of Computer Science, Xiamen University, Xiamen, China 2 Tencent Jarvis Lab, Shenzhen, China 3 Department of Digestive Diseases, School of Medicine, Xiamen University, Xiamen, China 4 School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China |
| Pseudocode | No | The paper includes figures illustrating network architectures (e.g., Figure 1, Figure 2) and describes the methodology in text, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide an explicit statement about the availability of open-source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We evaluate the Siamese-Baseline on brain anatomical structures (the CANDI Dataset (Kennedy et al. 2011)), and the IDA-Baseline on abdominal organs (the Multi-organ Dataset (Gibson et al. 2018; Roth et al. 2015; Clark et al. 2013; Xu et al. 2016)). |
| Dataset Splits | No | The paper states, "For both datasets, we randomly select 20 volumes as test data, and use the others for training." This defines the train/test split, but no separate validation split (percentages or sample counts) is reported. |
| Hardware Specification | Yes | "We train the Siamese-Baseline and IDA-Baseline on one NVIDIA GeForce RTX 2080 Ti GPU with a single pair of volumes for each batch, on a workstation with Ubuntu 18.04.2 LTS and 251 GB memory." |
| Software Dependencies | Yes | "All experiments are implemented with Keras 2.2.0 (Chollet et al. 2015) and TensorFlow 1.10.0 (Abadi et al. 2016)." |
| Experiment Setup | Yes | "The network is trained with the Adam (Kingma and Ba 2014) optimizer with a learning rate of 0.0002 for the Siamese-Baseline for 600 epochs and 0.0001 for the IDA-Baseline for 2,000 epochs." |
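For quick reference, the reported training hyperparameters can be collected into a small configuration sketch. This is our own illustrative summary (the dictionary names are hypothetical and do not come from the authors' code); under the stated Keras 2.2.0 setup, each entry would map to something like `keras.optimizers.Adam(lr=...)` passed to `model.compile`.

```python
# Illustrative summary of the paper's reported training setup.
# Both baselines use the Adam optimizer (Kingma and Ba 2014);
# the batch is a single pair of volumes on one RTX 2080 Ti GPU.
TRAIN_CONFIGS = {
    "siamese_baseline": {"optimizer": "adam", "learning_rate": 2e-4, "epochs": 600},
    "ida_baseline":     {"optimizer": "adam", "learning_rate": 1e-4, "epochs": 2000},
}

for name, cfg in TRAIN_CONFIGS.items():
    print(f"{name}: lr={cfg['learning_rate']}, epochs={cfg['epochs']}")
```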