Improving Robustness of Deep-Learning-Based Image Reconstruction

Authors: Ankit Raj, Yoram Bresler, Bo Li

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments using the proposed min-max scheme confirm convergence to this solution. We complement the theory by experiments on non-linear Compressive Sensing (CS) reconstruction by a deep neural network on two standard datasets, and, using anonymized clinical data, on a state-of-the-art published algorithm for low-dose x-ray CT reconstruction.
Researcher Affiliation | Academia | (1) Coordinated Science Laboratory and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (UIUC); (2) Department of Computer Science, UIUC.
Pseudocode | Yes | Algorithm 1: Algorithm for training at iteration T. Input: mini-batch samples (x_T, y_T), G_{T-1}, f_{T-1}. Output: G_T and f_T. (A hedged sketch of one such iteration appears after the table.)
Open Source Code | No | The paper does not provide any explicit statement or link indicating the release of open-source code for the described methodology.
Open Datasets | Yes | The MNIST dataset (LeCun et al., 1998) consists of 28×28 gray-scale images of digits with 50,000 training and 10,000 test samples. The CelebA dataset (Liu et al., 2015) consists of more than 200,000 celebrity images. We used anonymized clinical CT images (Vannier, 2007) of size 512×512: 884 for training & validation and 221 for evaluation.
Dataset Splits | Yes | The MNIST dataset (LeCun et al., 1998) consists of 28×28 gray-scale images of digits with 50,000 training and 10,000 test samples. We randomly pick 160,000 images for the training. Images from the 40,000 held-out set are used for evaluation. We used anonymized clinical CT images (Vannier, 2007) of size 512×512: 884 for training & validation and 221 for evaluation. (A split sketch appears after the table.)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions using the 'Adam Optimizer' and 'Astra toolbox (Van Aarle et al., 2016)' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | We used the Adam Optimizer with β1 = 0.5, β2 = 0.999, learning rate of 10⁻⁴ and mini-batch size of 128, but divided into K = 4 parts during the update of G, as described in Algorithm 1. Empirically, we found λ1 = 1 and λ2 = 0.1 in (6) gave the best performance in terms of robustness (lower ρ̂) for different perturbations. We found λ1 = 3 and λ2 = 1 in (6) gave the best robustness performance (lower ρ̂) for different perturbations. (A configuration sketch appears after the table.)
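
The Pseudocode row quotes only the header of Algorithm 1. As a rough illustration of the min-max update it refers to, here is a minimal PyTorch-style sketch of one training iteration; the loss form, the names G, f, lambda1, lambda2, and the handling of the K sub-batches are assumptions made for illustration, not the authors' implementation.

    import torch

    def train_iteration(G, f, x_batch, y_batch, opt_G, opt_f,
                        lambda1=1.0, lambda2=0.1, K=4):
        # Maximization step (assumed form): f proposes a perturbation of the
        # measurements and is driven to increase the reconstruction error.
        opt_f.zero_grad()
        delta = f(y_batch)
        loss_f = -torch.mean((G(y_batch + delta) - x_batch) ** 2)
        loss_f.backward()
        opt_f.step()

        # Minimization step (assumed form): G is updated on the mini-batch
        # split into K parts, weighting clean and perturbed reconstruction
        # errors by lambda1 and lambda2.
        for x_k, y_k in zip(x_batch.chunk(K), y_batch.chunk(K)):
            opt_G.zero_grad()
            delta_k = f(y_k).detach()
            loss_G = (lambda1 * torch.mean((G(y_k) - x_k) ** 2)
                      + lambda2 * torch.mean((G(y_k + delta_k) - x_k) ** 2))
            loss_G.backward()
            opt_G.step()
        return G, f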
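
The CelebA split quoted in the Dataset Splits row (160,000 random training images, 40,000 held out for evaluation) amounts to a random permutation of the image indices. A minimal sketch, in which the seed and the exact total of 200,000 images are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)        # seed chosen arbitrarily
    perm = rng.permutation(200_000)       # CelebA has "more than 200,000" images
    train_idx = perm[:160_000]            # 160,000 images picked at random for training
    eval_idx = perm[160_000:200_000]      # 40,000 held-out images used for evaluation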
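
The Experiment Setup row fixes the optimizer hyperparameters. A minimal sketch of that configuration, assuming PyTorch; the placeholder networks and the use of one Adam instance per network are illustrative assumptions:

    import torch

    # Placeholder networks; the actual architectures are those described in the paper.
    G = torch.nn.Sequential(torch.nn.Linear(784, 784))   # reconstruction network (placeholder)
    f = torch.nn.Sequential(torch.nn.Linear(784, 784))   # perturbation network (placeholder)

    # Adam with beta1 = 0.5, beta2 = 0.999 and learning rate 1e-4, as quoted.
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
    opt_f = torch.optim.Adam(f.parameters(), lr=1e-4, betas=(0.5, 0.999))

    BATCH_SIZE = 128                 # mini-batch size, divided into K parts when updating G
    K = 4
    LAMBDA_1, LAMBDA_2 = 1.0, 0.1    # one quoted setting; the other quoted setting is 3 and 1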