Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks

Authors: Xiang He, Sibei Yang, Guanbin Li, Haofeng Li, Huiyou Chang, Yizhou Yu (pp. 8417-8424)

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on both lung and skin lesion segmentation datasets have demonstrated that NLCEN outperforms any other state-of-the-art biomedical image segmentation methods against adversarial attacks.
Researcher Affiliation Collaboration (1) School of Data and Computer Science, Sun Yat-sen University, China; (2) The University of Hong Kong, Hong Kong; (3) Deepwise AI Lab, China
Pseudocode No The paper describes the architecture and operations using mathematical formulas and textual descriptions, but does not include any pseudocode or algorithm blocks.
Open Source Code No The paper does not provide any explicit statements about making the source code available, nor does it include a link to a code repository.
Open Datasets Yes We have conducted evaluations on two commonly used benchmark biomedical image datasets, the Japanese Society of Radiological Technology (JSRT) dataset for lung segmentation (Shiraishi et al. 2000) and the International Symposium on Biomedical Imaging (ISBI 2016) dataset for skin lesion segmentation (Gutman et al. 2016).
Dataset Splits Yes We split chest radiographs into a training set of 124 images and a test set of 123 images by following previous practices in the literature (Hwang and Park 2017). The ISBI 2016 dataset provides 900 training images and 379 testing images with binary masks of skin lesion.
Hardware Specification Yes It takes 2 hours to train a model on the JSRT dataset in a single NVIDIA TITAN GPU and 2 more hours to generate adversarial samples for testing when an intensity of adversarial perturbation is given.
Software Dependencies No Our proposed NLCEN with NLCE modules has been implemented on the open source deep learning framework, PyTorch (Paszke et al. 2017). While PyTorch is mentioned, no version number is provided for PyTorch itself or for any other software dependency.
Experiment Setup Yes We set the mini-batch size to 8, and all input images are resized to 256×256. The Adam optimizer is adopted to update network parameters with the learning rate set to 0.001 initially and reduced by 10% whenever the training loss stops decreasing until 0.0001. We use a weight decay of 0.0001 and an exponential decay rate for the first moment estimates and the second moment estimates of 0.9 and 0.999 respectively.
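The learning-rate decay described in the setup row can be sketched in plain Python. This is a minimal sketch, not the authors' code: the paper states only that the rate starts at 0.001, is "reduced by 10% whenever the training loss stops decreasing", and bottoms out at 0.0001; the one-step plateau test used here is an assumption.

```python
def next_learning_rate(current_lr, loss_history, floor=1e-4, factor=0.9):
    """Return the learning rate for the next epoch.

    Multiplies `current_lr` by `factor` (a 10% reduction) when the most
    recent training loss is not lower than the previous one -- a minimal
    "stopped decreasing" criterion, which is an assumption here -- and
    clamps the result at `floor` (0.0001 in the paper).
    """
    if len(loss_history) >= 2 and loss_history[-1] >= loss_history[-2]:
        return max(current_lr * factor, floor)
    return current_lr
```

In a PyTorch training loop this rule would typically be realized with `torch.optim.lr_scheduler.ReduceLROnPlateau` wrapped around an Adam optimizer configured with the stated hyperparameters (`lr=0.001`, `betas=(0.9, 0.999)`, `weight_decay=0.0001`, `min_lr=0.0001`).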