It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems

Authors: Regev Cohen, Yochai Blau, Daniel Freedman, Ehud Rivlin

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we study the performance of Algorithm 1 using our three proposed potential-driven denoisers: GraDnCNN, DnICNN, DnDICNN. We compare ourselves to PnP-PGD and RED-SD, applied with the popular DnCNN denoiser [54], for the tasks of Gaussian deblurring and image super resolution.
Researcher Affiliation | Industry | Regev Cohen (Verily Research, Israel; regevcohen@google.com), Yochai Blau (Google Research, Israel; yochaib@google.com), Daniel Freedman (Verily Research, Israel; danielfreedman@google.com), Ehud Rivlin (Verily Research, Israel; ehud@google.com)
Pseudocode | Yes | Algorithm 1: Regularization by Potential-Driven Denoising (a hedged sketch of this scheme appears after the table)
Open Source Code | No | The paper does not include any explicit statement about making its source code available, nor does it provide a link to a code repository.
Open Datasets | Yes | For training the denoising networks for blind Gaussian denoising we use the public DIV2K dataset [2] that consists of a total of 900 high resolution images, 800 for training and 100 for validation. (A loading sketch appears after the table.)
Dataset Splits | Yes | For training the denoising networks for blind Gaussian denoising we use the public DIV2K dataset [2] that consists of a total of 900 high resolution images, 800 for training and 100 for validation.
Hardware Specification | Yes | All experiments are performed in Tensorflow [1] where each model is trained on a single NVIDIA Tesla 32GB V100 GPU.
Software Dependencies | No | The paper mentions 'Tensorflow [1]' as the framework used, but does not provide specific version numbers for TensorFlow or any other software dependency.
Experiment Setup | Yes | Given the datasets detailed above, we train each of the networks using an Adam optimizer for 100 epochs with a constant learning rate of $10^{-3}$. For the training loss, we use a modified version of the mean squared error (MSE) cost function: $\sum_n \frac{1}{\sigma_n^2}\,\mathrm{MSE}\big(x_n - \bar{x}_n,\ R_\theta(x_n)\big)$. (A training sketch appears after the table.)
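
For context on the Pseudocode row: the table names Algorithm 1 (Regularization by Potential-Driven Denoising) but does not reproduce its steps. The sketch below is a minimal, hypothetical rendering of the general scheme such potential-driven denoisers enable, assuming the common formulation in which the trained residual network R_theta is the gradient of a regularizing potential, so a plain gradient step on the regularized objective uses grad_f(x) + lam * R_theta(x). The step size mu, weight lam, iteration count, and grad_f are illustrative placeholders, not values from the paper.

```python
import numpy as np

def potential_driven_restoration(y, grad_f, residual_net,
                                 lam=0.1, mu=1e-2, n_iters=200):
    """Hypothetical sketch in the spirit of Algorithm 1 (not the paper's exact listing).

    y:            degraded observation (used here as the initialization).
    grad_f(x):    gradient of the data-fidelity term, e.g. H.T @ (H @ x - y)
                  for a known blur or downsampling operator H.
    residual_net: trained residual R_theta(x) = x - D_theta(x); when the denoiser
                  D_theta is the gradient of a potential, R_theta is (up to scale)
                  the gradient of the implicit regularizer.
    """
    x = y.copy()
    for _ in range(n_iters):
        # Gradient step on f(x) + lam * rho(x), with grad rho(x) = R_theta(x).
        x = x - mu * (grad_f(x) + lam * residual_net(x))
    return x
```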
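
For the Open Datasets and Dataset Splits rows: DIV2K with the quoted 800/100 train/validation split is available through TensorFlow Datasets. A minimal loading sketch follows; the 'div2k/bicubic_x4' config name and the random-sigma noising step are assumptions for illustration, not details stated in the paper.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# DIV2K via TensorFlow Datasets: 800 training and 100 validation HR images,
# matching the split quoted above. The bicubic_x4 config is an assumption.
train_ds = tfds.load('div2k/bicubic_x4', split='train')
val_ds = tfds.load('div2k/bicubic_x4', split='validation')

def add_gaussian_noise(example, max_sigma=50.0):
    """Assumed recipe for blind Gaussian denoising: noise at a random level."""
    clean = tf.cast(example['hr'], tf.float32) / 255.0
    sigma = tf.random.uniform([], 0.0, max_sigma / 255.0)
    noisy = clean + sigma * tf.random.normal(tf.shape(clean))
    return clean, noisy, sigma

train_ds = train_ds.map(add_gaussian_noise)
```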
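
For the Experiment Setup row: reading the reconstructed loss as $\sum_n \frac{1}{\sigma_n^2}\,\mathrm{MSE}(x_n - \bar{x}_n, R_\theta(x_n))$, with $x_n$ a noisy input at level $\sigma_n$, $\bar{x}_n$ its clean counterpart, and $R_\theta$ the residual network, a minimal TensorFlow sketch of that loss and the quoted optimizer settings might look as follows; the batch shapes and the model itself are placeholders.

```python
import tensorflow as tf

def weighted_residual_mse(clean, noisy, sigma, residual_pred):
    """sum_n (1/sigma_n^2) * MSE(x_n - xbar_n, R_theta(x_n)), as quoted above.

    clean, noisy:  batches of clean images xbar_n and noisy images x_n,
                   shape [batch, height, width, channels].
    sigma:         per-sample noise levels sigma_n, shape [batch].
    residual_pred: network output R_theta(x_n), same shape as the images.
    """
    true_residual = noisy - clean  # the noise the residual network should predict
    per_sample_mse = tf.reduce_mean(
        tf.square(true_residual - residual_pred), axis=[1, 2, 3])
    return tf.reduce_sum(per_sample_mse / tf.square(sigma))

# Quoted settings: Adam with a constant learning rate of 1e-3, for 100 epochs.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(model, clean, noisy, sigma):
    with tf.GradientTape() as tape:
        loss = weighted_residual_mse(clean, noisy, sigma,
                                     model(noisy, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```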