Robust One-Shot Segmentation of Brain Tissues via Image-Aligned Style Transformation

Authors: Jinxin Lv, Xiaoyu Zeng, Sheng Wang, Ran Duan, Zhiwei Wang, Qiang Li

AAAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on two public datasets demonstrate 1) a competitive segmentation performance of our method compared to the fully-supervised method, and 2) a superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%.
Researcher Affiliation Academia 1 Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; 2 MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
Pseudocode No The paper describes the method and uses figures to illustrate concepts, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes The source code is available at: https://github.com/JinxLv/One-shot-segmentation-via-IST.
Open Datasets Yes OASIS: The dataset (Simpson et al. 2019) contains 414 scans of T1 brain MRI with an image size of 256×256×256 and a voxel spacing of 1×1×1 mm. The ground-truth masks for 35 brain tissues are obtained by FreeSurfer (Fischl 2012) and SAMSEG (Puonti, Iglesias, and Van Leemput 2016). CANDIShare: The dataset (Kennedy et al. 2012) contains 103 scans of T1 brain MRI with an image size ranging from 256×256×128 to 256×256×158, and a voxel spacing of around 1×1×1.5 mm.
Dataset Splits No We randomly divided the data in each dataset into training and test sets, obtaining 331 training and 83 test images in OASIS, and 83 training and 20 test images in CANDIShare. The paper mentions training and test sets, but does not explicitly specify a validation set split.
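The reported splits (331/83 for OASIS, 83/20 for CANDIShare, with no separate validation set) amount to a simple random partition. The helper below is an illustrative sketch of such a split, assuming a fixed seed for repeatability; it is not the authors' actual split code.

```python
import random

def split_dataset(n_total, n_train, seed=0):
    """Randomly partition scan indices into train and test sets.

    The 331/83 (OASIS) and 83/20 (CANDIShare) counts come from the paper;
    the seed and this helper function are illustrative assumptions.
    """
    indices = list(range(n_total))
    random.Random(seed).shuffle(indices)
    return indices[:n_train], indices[n_train:]

# OASIS: 414 scans -> 331 training, 83 test
train_ids, test_ids = split_dataset(414, 331)
```

Without a recorded seed or published index lists, such a split is not exactly reproducible, which is why the variable is marked "No" despite the counts being stated.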
Hardware Specification Yes All training and testing were performed on a GPU resource of NVIDIA RTX 3090, and a CPU resource of Intel Xeon Gold 5220R.
Software Dependencies No We implemented our method based on Tensorflow (Abadi et al. 2016) and used the Adam optimizer to train the network. The paper mentions Tensorflow but does not specify its version number or versions for any other libraries/packages.
Experiment Setup Yes For both the unsupervised (initial) and weakly supervised (iterative) training phases of the reg-model, the learning rate was set to 1×10⁻⁴ and training was performed for 40,000 steps. We trained the seg-model for 20,000 steps with a learning rate of 1×10⁻³. During training, we performed random spatial transformations, including affine and B-spline transformations, on each training image to enhance the robustness of the network.
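The reported training schedule can be summarized as three phases. The hyperparameter values below are taken from the paper; the container class and phase names are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    """One training phase; values are as reported, the class is a sketch."""
    name: str
    steps: int
    learning_rate: float

SCHEDULE = [
    Phase("reg-model, unsupervised (initial)", 40_000, 1e-4),
    Phase("reg-model, weakly supervised (iterative)", 40_000, 1e-4),
    Phase("seg-model", 20_000, 1e-3),
]
```

All phases use the Adam optimizer per the paper; batch size and augmentation parameters (affine and B-spline transform ranges) are not specified, so they are omitted here.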