Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Non-Local U-Nets for Biomedical Image Segmentation

Authors: Zhengyang Wang, Na Zou, Dinggang Shen, Shuiwang Ji (pp. 6315-6322)

AAAI 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform thorough experiments on the 3D multimodality isointense infant brain MR image segmentation task to evaluate the non-local U-Nets. Results show that our proposed models achieve top performances with fewer parameters and faster computation."
Researcher Affiliation | Academia | "Zhengyang Wang¹, Na Zou¹, Dinggang Shen², Shuiwang Ji¹ (¹Texas A&M University; ²University of North Carolina at Chapel Hill)"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The experimental code and dataset information have been made publicly available" (footnote: https://github.com/divelab/Non-local-U-Nets)
Open Datasets | Yes | "The experimental code and dataset information have been made publicly available" (footnote: https://github.com/divelab/Non-local-U-Nets)
Dataset Splits | Yes | "To remove the bias of different subjects, the leave-one-subject-out cross-validation is used for evaluating segmentation performance. That is, for 10 subjects in our dataset, we train and evaluate models 10 times correspondingly. Each time one of the 10 subjects is left out for validation and the other 9 subjects are used for training."
Hardware Specification | Yes | "The settings of our device are GPU: Nvidia Titan Xp 12GB; CPU: Intel Xeon E5-2620 v4 2.10GHz; OS: Ubuntu 16.04.3 LTS."
Software Dependencies | No | The paper does not name its software dependencies with version numbers (e.g., Python 3.8 or PyTorch 1.x).
Experiment Setup | Yes | "Our proposed non-local U-Nets apply Dropout (Srivastava et al. 2014) with a rate of 0.5 in each global aggregation block and the output block before the final 1×1×1 convolution. A weight decay (Krogh and Hertz 1992) with a rate of 2e-6 is also employed. ... The batch size is set to 5. The Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.001 is employed to perform the gradient descent algorithm. ... the patch size is set to 32³ and the overlapping step size for inference is set to 8."
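The leave-one-subject-out protocol quoted under Dataset Splits can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the function name and data layout are hypothetical.

```python
# Leave-one-subject-out cross-validation: with 10 subjects, train and
# evaluate 10 times, each time holding out exactly one subject for
# validation and training on the other 9.

def leave_one_subject_out(subject_ids):
    """Yield (train_ids, held_out_id) pairs, one fold per subject."""
    for held_out in subject_ids:
        train_ids = [s for s in subject_ids if s != held_out]
        yield train_ids, held_out

subjects = list(range(10))                     # 10 subjects in the dataset
folds = list(leave_one_subject_out(subjects))  # 10 folds of (9 train, 1 val)
```

Each fold's held-out subject appears in no fold's training set, which is what removes per-subject bias from the evaluation.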
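The hyperparameters quoted under Experiment Setup can be collected into a single configuration. The dict below is a sketch with key names of my own choosing; the helper function is a generic sliding-window count illustrating how a 32³ patch with an overlap step of 8 tiles one axis of a volume, not the paper's inference code.

```python
# Hyperparameters as quoted from the paper's experiment setup.
config = {
    "dropout_rate": 0.5,         # in each global aggregation block and the output block
    "weight_decay": 2e-6,
    "batch_size": 5,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "patch_size": (32, 32, 32),  # 32^3 patches
    "overlap_step": 8,           # sliding-window stride at inference
}

def num_windows(extent, patch=32, step=8):
    """Number of window positions along one axis of length `extent`."""
    return max(1, (extent - patch) // step + 1)
```

For example, a 64-voxel axis yields `num_windows(64) == 5` overlapping positions, so neighboring predictions overlap by 24 voxels and can be averaged.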