Segmenting Medical MRI via Recurrent Decoding Cell
Authors: Ying Wen, Kai Xie, Lianghua He. Pages: 12452-12459
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The evaluation experiments on the BrainWeb, MRBrainS and HVSMR datasets demonstrate that the introduction of RDC effectively improves segmentation accuracy as well as reduces the model size, and that the proposed CRDN is robust to image noise and intensity non-uniformity in medical MRI. |
| Researcher Affiliation | Academia | 1School of Communication and Electronic Engineering & Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China 2School of Computer Science and Technology, East China Normal University, Shanghai, China 3Department of Computer Science and Technology, Tongji University, Shanghai, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We test on two brain datasets and one cardiovascular MRI dataset: the BrainWeb dataset (Cocosco et al. 1997), the MICCAI 2013 MRBrainS Challenge dataset (Mendrik et al. 2015) and the HVSMR 2016 Challenge dataset (Pace et al. 2015). |
| Dataset Splits | Yes | Brain Web is a simulated database which contains one MRI volume for normal brain with three modalities: T1, T2 and PD. It contains 399 slices, among which we choose 239 slices for training and validation, and 160 for testing. |
| Hardware Specification | Yes | An NVIDIA GeForce RTX 2080 is used for both training and testing. |
| Software Dependencies | No | The paper mentions using 'PyTorch as the implementation framework' but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | For training settings, we adopt batch normalization (Ioffe and Szegedy 2015) after each convolutional layer... We adopt a weight decay of 10^-4 and use Adam (Kingma and Ba 2014) for optimization; the learning rate starts from 6×10^-4 and gradually decays when training our CRDN. |
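The hyperparameters reported in the Experiment Setup row can be collected into a minimal, framework-agnostic sketch. The decay schedule is an assumption: the paper only states that the learning rate "gradually decays" from 6×10^-4, so the exponential per-epoch factor below is illustrative, not from the source.

```python
# Hedged sketch of the reported training hyperparameters: Adam optimizer,
# weight decay 1e-4, initial learning rate 6e-4 with gradual decay.
# `lr_decay_gamma` is an assumed per-epoch multiplicative factor; the paper
# does not specify the exact decay schedule.
config = {
    "optimizer": "Adam",        # Kingma and Ba 2014, as cited in the paper
    "weight_decay": 1e-4,
    "lr_init": 6e-4,
    "lr_decay_gamma": 0.9,      # assumption, not from the paper
}

def lr_at_epoch(epoch: int, cfg: dict = config) -> float:
    """Learning rate after `epoch` decay steps, assuming exponential decay."""
    return cfg["lr_init"] * cfg["lr_decay_gamma"] ** epoch
```

In PyTorch (the paper's stated framework) the same settings would map onto `torch.optim.Adam(params, lr=6e-4, weight_decay=1e-4)` plus a learning-rate scheduler of the reproducer's choosing.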