Spatially Covariant Lesion Segmentation

Authors: Hang Zhang, Rongguang Wang, Jinwei Zhang, Dongdong Liu, Chao Li, Jiahao Li

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply SCP to two lesion segmentation tasks, white matter hyperintensity (WMH) segmentation in magnetic resonance imaging (MRI) and liver tumor (LiT) segmentation in contrast-enhanced abdominal computerized tomography, to verify our hypothesis. The main findings of the paper are threefold... The experimental results suggest that with SCP relaxing the spatial invariance to a certain degree, 23.8%, 64.9%, and 74.7% reductions in GPU memory usage, FLOPs, and network parameter size can be achieved without compromising any segmentation accuracy.
Researcher Affiliation | Academia | Cornell University; University of Pennsylvania; New York University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. The methodology is described using mathematical equations and textual explanations.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include a link to a code repository for the described methodology.
Open Datasets | Yes | We use a publicly available WMH segmentation challenge dataset [Kuijf et al., 2019] for our experiments. The dataset contains 60 subjects acquired from different scanners of three institutes. ... An open challenge dataset LiTS [Bilic et al., 2019] from MICCAI 2017 is adopted in our experiment. ... https://wmh.isi.uu.nl ... https://competitions.codalab.org/competitions/17094
Dataset Splits | Yes | In the experiments, the dataset is randomly split into three subsets for model training (42), validation (6), and testing (12). ... In the experiments, the dataset is split into three subsets for model training (91), validation (13), and testing (27).
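The reported subject-level splits (60 WMH subjects into 42/6/12, and 131 LiTS volumes into 91/13/27) can be reproduced with a simple random partition. The sketch below is an illustration in plain Python; the subject IDs and the fixed seed are assumptions, since the paper does not state them:

```python
import random

def split_subjects(subject_ids, n_train, n_val, n_test, seed=0):
    """Randomly partition subject IDs into disjoint train/val/test sets."""
    ids = list(subject_ids)
    assert len(ids) == n_train + n_val + n_test, "split sizes must cover all subjects"
    rng = random.Random(seed)  # fixed seed for repeatability (not specified in the paper)
    rng.shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# WMH challenge: 60 subjects -> 42 train / 6 val / 12 test
wmh_train, wmh_val, wmh_test = split_subjects(range(60), 42, 6, 12)

# LiTS: 131 volumes -> 91 train / 13 val / 27 test
lits_train, lits_val, lits_test = split_subjects(range(131), 91, 13, 27)
```

Splitting by subject (rather than by 2D slice) avoids leaking slices of one patient across the train and test sets.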
Hardware Specification | Yes | We use PyTorch [Paszke et al., 2019] for all network implementations, and train them with an Nvidia RTX 2080Ti GPU.
Software Dependencies | No | The paper mentions using 'PyTorch [Paszke et al., 2019]' but does not specify a version number for PyTorch or any other software dependencies with version numbers.
Experiment Setup | Yes | Implementation Details: We slice the original volumes into a stack of consecutive images. Each volume is center-cropped to a fixed size of 160×224 to accommodate batch training, followed by Z-score based intensity normalization. T1-w and FLAIR images of each subject are concatenated through the channel dimension before being used as the input to the network. To train the network, we use the Adam [Kingma and Ba, 2014] optimizer, with an initial learning rate of 10⁻³ (weight decay of 10⁻⁶), and a batch size of 14. The learning rate is halved at 50%, 70%, and 90% of the total training epoch (90) for optimal convergence.
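The optimization recipe above (Adam with initial learning rate 10⁻³, halved at 50%, 70%, and 90% of 90 epochs) implies a step-decay schedule that can be sketched without any deep-learning dependency. The function below computes the per-epoch learning rate; rounding the fractional milestones down to integer epochs is an assumption, as the paper does not specify it:

```python
def lr_at_epoch(epoch, total_epochs=90, base_lr=1e-3, gamma=0.5,
                fractions=(0.5, 0.7, 0.9)):
    """Learning rate after step decay at fixed fractions of training.

    Each time training passes a milestone epoch, the rate is
    multiplied by `gamma` (0.5 = halved, per the paper).
    """
    milestones = [int(f * total_epochs) for f in fractions]  # [45, 63, 81] for 90 epochs
    n_decays = sum(epoch >= m for m in milestones)
    return base_lr * gamma ** n_decays
```

In PyTorch this corresponds to pairing `torch.optim.Adam(params, lr=1e-3, weight_decay=1e-6)` with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[45, 63, 81], gamma=0.5)`.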