Region-aware Global Context Modeling for Automatic Nerve Segmentation from Ultrasound Images

Authors: Huisi Wu, Jiasheng Liu, Wei Wang, Zhenkun Wen, Jing Qin (pp. 2907-2915)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted extensive experiments on a famous public ultrasound nerve image segmentation dataset. Experimental results demonstrate that our method consistently outperforms our rivals in terms of segmentation accuracy.
Researcher Affiliation | Academia | College of Computer Science and Software Engineering, Shenzhen University; Centre for Smart Health, The Hong Kong Polytechnic University
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The code is available at https://github.com/jsonliu-szu/RAGCM.
Open Datasets | Yes | To evaluate the effectiveness of our proposed method, we conducted the experiments on the Kaggle ultrasound nerve segmentation challenge (https://www.kaggle.com/c/ultrasound-nerve-segmentation). This dataset consists of 11143 ultrasound images with a resolution of 580 x 420, which are manually annotated by clinical experts to generate mask images. Among the 11143 samples, 4508 and 1127 images are used for training and validation respectively, while the remaining images are used for testing.
Dataset Splits | Yes | Among the 11143 samples, 4508 and 1127 images are used for training and validation respectively, while the remaining 5508 images are used for testing. (A sketch reproducing these split counts appears after the table.)
Hardware Specification | Yes | We implemented our network with PyTorch (Paszke et al. 2017) on a single NVIDIA GeForce RTX 2080 Ti (11 GB memory).
Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al. 2017)' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | During the training process, our initial learning rate is 0.0001. To obtain a smoother convergence curve, the learning rate is also multiplied by (1 - iter/total_iter)^power with power = 0.9 after each iteration (Krogh and Hertz 1991). To speed up network convergence, we also employed the Adam algorithm to optimize the training process. Considering that almost half of the images are background images, we also composed each mini-batch (batch size = 8) with a 1:1 ratio of negative to positive samples to train our model. Our model converged after 70 epochs in our experiments. (A hedged PyTorch sketch of this configuration appears after the split sketch below.)
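
The paper reports fixed split sizes (4508 train, 1127 validation, 5508 test out of 11143 images) but does not state how the partition is drawn. The following is a minimal sketch, assuming a uniform random split that merely reproduces the reported counts; the image-id handling is illustrative, not the authors' code.

```python
import random

# Split sizes reported in the paper: 11143 images in total,
# 4508 for training, 1127 for validation, the remainder for testing.
TOTAL, N_TRAIN, N_VAL = 11143, 4508, 1127
N_TEST = TOTAL - N_TRAIN - N_VAL  # 5508

def split_dataset(image_ids, seed=0):
    """Partition image ids into train/val/test with the reported sizes.

    The paper does not specify whether the split is random or patient-wise;
    a uniform random split is assumed here purely for illustration.
    """
    assert len(image_ids) == TOTAL
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    train = ids[:N_TRAIN]
    val = ids[N_TRAIN:N_TRAIN + N_VAL]
    test = ids[N_TRAIN + N_VAL:]
    return train, val, test

if __name__ == "__main__":
    train, val, test = split_dataset(range(TOTAL))
    print(len(train), len(val), len(test))  # 4508 1127 5508
```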
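The training recipe quoted above (Adam, initial learning rate 1e-4, per-iteration polynomial decay with power 0.9, batch size 8, and a 1:1 negative-to-positive ratio) can be expressed as a short PyTorch sketch. This is not the authors' released code: `model`, `train_set`, and the `train_set.has_nerve` flag are placeholder assumptions, and the weighted sampler balances classes only in expectation rather than composing exact 1:1 batches as the paper describes.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def build_training(model, train_set, total_iters):
    # Adam optimizer with the paper's initial learning rate of 0.0001.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Polynomial ("poly") decay applied after every iteration:
    # lr = initial_lr * (1 - iter / total_iters) ** 0.9
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lambda it: (1 - it / total_iters) ** 0.9)

    # Roughly half of the images contain no nerve; weight positives and
    # negatives so mini-batches of 8 are drawn at ~1:1 negative:positive.
    # `train_set.has_nerve` is an assumed per-image flag (mask non-empty).
    pos = sum(train_set.has_nerve)
    neg = len(train_set) - pos
    weights = [1.0 / pos if flag else 1.0 / neg for flag in train_set.has_nerve]
    sampler = WeightedRandomSampler(weights, num_samples=len(train_set))
    loader = DataLoader(train_set, batch_size=8, sampler=sampler)

    return optimizer, scheduler, loader
```

Calling `scheduler.step()` after each `optimizer.step()` reproduces the per-iteration decay; under this schedule and sampling, training would be run for the roughly 70 epochs the paper reports for convergence.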