Learning Provably Robust Estimators for Inverse Problems via Jittering

Authors: Anselm Krainovic, Mahdi Soltanolkotabi, Reinhard Heckel

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Furthermore, we examine jittering empirically via training deep neural networks (U-nets) for natural image denoising, deconvolution, and accelerated magnetic resonance imaging (MRI). The results show that jittering significantly enhances the worst-case robustness, but can be suboptimal for inverse problems beyond denoising. (A minimal jittering training sketch follows the table.)
Researcher Affiliation | Academia | Anselm Krainovic (Technical University of Munich, anselm.krainovic@tum.de); Mahdi Soltanolkotabi (University of Southern California, soltanol@usc.edu); Reinhard Heckel (Technical University of Munich, reinhard.heckel@tum.de)
Pseudocode | No | The paper does not contain any explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | The repository at https://github.com/MLI-lab/robust_reconstructors_via_jittering contains the code to reproduce all results in the main body of this paper.
Open Datasets | Yes | We obtain train and validation datasets {(x1, y1), …, (xN, yN)} of sizes 34k and 4k, respectively, from colorized images of size n = 128 × 128 × 3 generated by randomly cropping and flipping ImageNet images. We use the fastMRI single-coil knee dataset (Zbontar et al., 2018), which contains the images x and fully sampled measurements (M = I). (An illustrative crop-and-flip preprocessing sketch follows the table.)
Dataset Splits | Yes | We obtain train and validation datasets {(x1, y1), …, (xN, yN)} of sizes 34k and 4k, respectively, from colorized images of size n = 128 × 128 × 3 generated by randomly cropping and flipping ImageNet images. We process it by random subsampling at acceleration factor 4 and obtain train, validation and test datasets with approximately 31k, 3.5k and 7k slices, respectively. (A subsampling-mask sketch follows the table.)
Hardware Specification | Yes | The experimental results presented in this paper were computed using on-premise infrastructure equipped with Nvidia RTX A6000 GPUs.
Software Dependencies | No | The paper mentions "PyTorch's Adam optimizer" but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | Throughout, we use PyTorch's Adam optimizer with learning rate 10⁻³ and batch size 50 for natural images, and 10⁻² and 1 for MRI data. As perturbation levels, we consider values within the practically interesting regime of ϵ²/E[‖Ax‖₂²] < 0.3 for natural images and 0.03 for MRI data. We use stochastic gradient descent (SGD) with learning rate 10⁻², momentum 0.9 and batch size 100. (An optimizer-setup sketch follows the table.)
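
Below the table are a few hedged code sketches illustrating the quoted setup details. First, a minimal sketch of jittering-style training, assuming (clean image, measurement) pairs with y = Ax + noise and a PyTorch reconstruction network such as a U-net; the names train_with_jittering, loader, and sigma_jitter are illustrative and not taken from the paper's repository.

import torch

def train_with_jittering(model, loader, sigma_jitter, lr=1e-3, epochs=1):
    """Train a reconstruction network on (image, measurement) pairs with jittering."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:                         # clean image x, measurement y
            w = sigma_jitter * torch.randn_like(y)  # jittering: extra Gaussian noise on y
            x_hat = model(y + w)                    # reconstruct from the jittered measurement
            loss = loss_fn(x_hat, x)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

The key design choice jittering makes is that robustness comes from the noise added to the training inputs rather than from an inner adversarial optimization, so each training step costs roughly as much as standard training.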
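
Second, an illustrative crop-and-flip preprocessing pipeline for the 128 × 128 × 3 ImageNet patches described in the Open Datasets row, assuming a local ImageFolder copy of ImageNet; the path is a placeholder, and the exact augmentation order in the released code may differ.

import torchvision.transforms as T
from torchvision.datasets import ImageFolder

transform = T.Compose([
    T.RandomCrop(128),         # random 128 x 128 crop
    T.RandomHorizontalFlip(),  # random flip
    T.ToTensor(),              # 3 x 128 x 128 float tensor in [0, 1]
])

train_set = ImageFolder("path/to/imagenet/train", transform=transform)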
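
Third, a generic sketch of random k-space column subsampling at acceleration factor 4 for the fastMRI knee data quoted in the Dataset Splits row; the mask construction and the center_fraction value are illustrative stand-ins, not the exact fastMRI mask function used in the paper.

import torch

def random_column_mask(num_cols, acceleration=4, center_fraction=0.08):
    """Sample a random k-space column mask at a given acceleration factor."""
    num_center = int(round(num_cols * center_fraction))  # fully sampled low-frequency columns
    prob = (num_cols / acceleration - num_center) / (num_cols - num_center)
    mask = torch.rand(num_cols) < prob                   # randomly sampled peripheral columns
    start = (num_cols - num_center) // 2
    mask[start:start + num_center] = True                # always keep the center band
    return mask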
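
Finally, the optimizer settings quoted in the Experiment Setup row translate into PyTorch roughly as follows; the placeholder model stands in for the U-net, and the pairing of each optimizer with a specific experiment follows the quote only loosely rather than the released code.

import torch

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder for the reconstruction network

adam_natural = torch.optim.Adam(model.parameters(), lr=1e-3)       # natural images, batch size 50
adam_mri = torch.optim.Adam(model.parameters(), lr=1e-2)           # MRI data, batch size 1
sgd = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)   # batch size 100, as quoted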