Rethinking Data Augmentation for Single-Source Domain Generalization in Medical Image Segmentation

Authors: Zixian Su, Kai Yao, Xi Yang, Kaizhu Huang, Qiufeng Wang, Jie Sun

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate that SBF could achieve a substantially lower source risk compared with the existing random/no fusion mechanism, leading to a tighter generalization bound in our setting. In summary, we make the following contributions: ... Combining the two strategies, our Saliency-balancing Location-scale Augmentation (SLAug) for SDG achieves superior performance in two segmentation tasks, demonstrating its outstanding generalization capability.
Researcher Affiliation | Academia | (1) University of Liverpool, Liverpool, the United Kingdom; (2) School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China; (3) Data Science Research Center, Duke Kunshan University, Kunshan, China
Pseudocode | Yes | Algorithm 1: Saliency-balancing Location-scale Augmentation (an illustrative sketch of a location-scale intensity transform is given after the table)
Open Source Code | Yes | Code is available at https://github.com/Kaiseem/SLAug.
Open Datasets | Yes | We evaluate our method on two datasets, cross-modality abdominal dataset (Landman et al. 2015; Kavur et al. 2021) and cross-sequence cardiac dataset (Zhuang et al. 2020).
Dataset Splits | Yes | The detailed split of dataset and the preprocessing steps follow the instructions given by Ouyang et al. (2020), which can be found in the given code.
Hardware Specification | Yes | We implemented our framework on a workstation equipped with one NVIDIA GeForce RTX 3090 GPU (24 GB memory).
Software Dependencies | No | The paper mentions 'Adam' as the optimizer but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | The grid size g is empirically set to 3 and 18 for abdominal and cardiac datasets respectively. Adam (Kingma and Ba 2014) is used as the optimizer with an initial learning rate of 3×10⁻⁴ and weight decay of 3×10⁻⁵. The learning rate remains unchanged for the first 50 epochs and linearly decays to zero over the next 1,950 epochs. For all experiments, batch size is set to 32 and the methods are evaluated at the 2,000th epoch.
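The Pseudocode row above refers to the paper's Algorithm 1 (Saliency-balancing Location-scale Augmentation). As a minimal, hypothetical illustration only, the NumPy sketch below shows what a basic global location-scale (scale-and-shift) intensity transform could look like; the function name, parameter ranges, and [0, 1] normalization are assumptions, and the sketch omits the paper's local (mask-aware) variant and the saliency-balancing fusion step.

```python
import numpy as np

def location_scale_intensity_aug(image, scale_range=(0.6, 1.4),
                                 shift_range=(-0.2, 0.2), rng=None):
    """Hypothetical global location-scale intensity augmentation.

    Multiplies (scale) and shifts (location) the intensities of a
    normalized image using randomly drawn parameters. The ranges are
    illustrative only, not the values used in the paper.
    """
    rng = rng or np.random.default_rng()
    scale = rng.uniform(*scale_range)   # "scale" component
    shift = rng.uniform(*shift_range)   # "location" component
    augmented = scale * image + shift
    return np.clip(augmented, 0.0, 1.0)  # assumes intensities in [0, 1]

# Example usage on a dummy 2D slice:
slice_2d = np.random.default_rng(0).random((256, 256)).astype(np.float32)
augmented_slice = location_scale_intensity_aug(slice_2d)
```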
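For the Experiment Setup row, here is a sketch, assuming PyTorch, of the reported Adam configuration and learning-rate schedule (constant for the first 50 epochs, then linear decay to zero over the next 1,950 epochs). The placeholder model and the scheduler wiring are assumptions; only the hyperparameter values come from the paper.

```python
import torch

# Placeholder network for illustration, not the paper's segmentation model.
model = torch.nn.Conv2d(1, 4, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=3e-5)

WARMUP_EPOCHS = 50    # learning rate held constant
TOTAL_EPOCHS = 2000   # methods evaluated at the 2,000th epoch

def lr_factor(epoch):
    # 1.0 for the first 50 epochs, then linear decay to zero over the
    # remaining 1,950 epochs.
    if epoch < WARMUP_EPOCHS:
        return 1.0
    return max(0.0, 1.0 - (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(TOTAL_EPOCHS):
    # ... train one epoch with batch size 32 ...
    scheduler.step()
```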