Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems

Authors: Regev Cohen, Yochai Blau, Daniel Freedman, Ehud Rivlin

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we study the performance of Algorithm 1 using our three proposed potential-driven denoisers: GraDnCNN, DnICNN, DnDICNN. We compare ourselves to PnP-PGD and RED-SD, applied with the popular DnCNN denoiser [54], for the tasks of Gaussian deblurring and image super resolution.
Researcher Affiliation | Industry | Regev Cohen (Verily Research, Israel); Yochai Blau (Google Research, Israel); Daniel Freedman (Verily Research, Israel); Ehud Rivlin (Verily Research, Israel)
Pseudocode | Yes | Algorithm 1: Regularization by Potential-Driven Denoising (a hedged sketch of such an iteration appears below the table).
Open Source Code | No | The paper does not include any explicit statement about making its source code available or provide a link to a code repository.
Open Datasets | Yes | For training the denoising networks for blind Gaussian denoising we use the public DIV2K dataset [2] that consists of a total of 900 high resolution images, 800 for training and 100 for validation.
Dataset Splits | Yes | For training the denoising networks for blind Gaussian denoising we use the public DIV2K dataset [2] that consists of a total of 900 high resolution images, 800 for training and 100 for validation.
Hardware Specification | Yes | All experiments are performed in Tensorflow [1] where each model is trained on a single NVIDIA Tesla 32GB V100 GPU.
Software Dependencies | No | The paper mentions 'Tensorflow [1]' as the framework used, but does not provide specific version numbers for Tensorflow or any other software dependencies.
Experiment Setup | Yes | Given the datasets detailed above, we train each of the networks using an Adam optimizer for 100 epochs with a constant learning rate of $10^{-3}$. For the training loss, we use a modified version of mean squared error (MSE) cost function: $\sum_n \frac{1}{\sigma_n^2}\,\mathrm{MSE}\!\left(x_n - \bar{x}_n,\; R_\theta(x_n)\right)$ (see the loss sketch below the table).
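
The Pseudocode row references Algorithm 1, "Regularization by Potential-Driven Denoising." The paper's exact iteration is not quoted in this report, so the following is only a minimal sketch of a RED-style gradient scheme in which the regularizer gradient equals the denoiser residual, the defining property of a potential-driven (gradient-driven) denoiser; the names `potential_driven_red`, `forward_op`, `adjoint_op`, `lam`, and `step` are illustrative placeholders, not the paper's notation.

```python
def potential_driven_red(y, forward_op, adjoint_op, denoiser,
                         lam=0.1, step=1e-2, iters=200):
    """Sketch of a RED-style iteration (not the paper's exact Algorithm 1).

    Targets min_x 0.5*||A x - y||^2 + lam * rho(x), where the potential
    rho is never evaluated directly: for a gradient-driven denoiser D,
    grad rho(x) is taken to be the residual x - D(x).
    """
    x = adjoint_op(y)  # crude initialization: A^T y
    for _ in range(iters):
        data_grad = adjoint_op(forward_op(x) - y)  # gradient of the data term
        reg_grad = x - denoiser(x)                 # denoiser residual = grad rho(x)
        x = x - step * (data_grad + lam * reg_grad)
    return x

# Example use with an identity forward operator (pure denoising):
# x_hat = potential_driven_red(y, lambda v: v, lambda v: v,
#                              my_denoiser, lam=0.05, step=0.5, iters=100)
```

When the denoiser residual really is the gradient of a potential, such a scheme reduces to plain gradient descent on a single objective, which matches the paper's motivation for constraining the denoiser architecture to obtain convergence guarantees.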
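The Experiment Setup row quotes a noise-level-weighted MSE. Below is a minimal TensorFlow sketch of such a loss, assuming residual learning (the network $R_\theta$ maps the noisy image $x_n$ to a prediction of the noise $x_n - \bar{x}_n$, as in DnCNN-style training) and NHWC image batches; the function and argument names are mine, not the paper's.

```python
import tensorflow as tf

def weighted_residual_mse(clean, noisy, residual_pred, sigma):
    """Sketch of sum_n (1/sigma_n^2) * MSE(x_n - xbar_n, R_theta(x_n)).

    clean:         xbar_n, ground-truth images, shape [N, H, W, C]
    noisy:         x_n, noisy inputs at per-image noise level sigma_n
    residual_pred: R_theta(x_n), the network's noise prediction
    sigma:         per-image noise levels, shape [N]
    """
    target = noisy - clean  # the residual (noise) the network should predict
    per_image_mse = tf.reduce_mean(tf.square(target - residual_pred),
                                   axis=[1, 2, 3])
    return tf.reduce_sum(per_image_mse / tf.square(sigma))
```

The $1/\sigma_n^2$ weighting presumably keeps high-noise examples, whose raw MSE is naturally larger, from dominating the blind-denoising objective.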