Gradient Step Denoiser for convergent Plug-and-Play
Authors: Samuel Hurault, Arthur Leclaire, Nicolas Papadakis
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that it is possible to learn such a deep denoiser while not compromising the performance in comparison to other state-of-the-art deep denoisers used in PnP schemes. We apply our proximal gradient algorithm to various ill-posed inverse problems, e.g., deblurring, super-resolution and inpainting. For all these applications, numerical results empirically confirm the convergence results. Experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | Samuel Hurault, Arthur Leclaire & Nicolas Papadakis, Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251, F-33400 Talence, France |
| Pseudocode | Yes | Our complete PnP scheme is presented in Algorithm 1. It includes a backtracking procedure on the stepsize τ that will be detailed in Section 4.2. Also, after convergence, we found it useful to apply an extra gradient step Id − λτ∇gσ in order to discard the residual noise brought by the last proximal step Prox_τf. (An illustrative sketch of this iteration is given below the table.) |
| Open Source Code | Yes | Anonymous source code is given in supplementary material. It contains a README.md file that explains step by step how to run the algorithm and replicate the results of the paper. |
| Open Datasets | Yes | We use the color image training dataset proposed in Zhang et al. (2021), i.e. a combination of the Berkeley segmentation dataset (CBSD) (Martin et al., 2001), Waterloo Exploration Database (Ma et al., 2017), DIV2K dataset (Agustsson & Timofte, 2017) and Flickr2K dataset (Lim et al., 2017). |
| Dataset Splits | No | The paper mentions training on '128×128 patches randomly sampled from the training images' and evaluating on CBSD68 and Set3C datasets, but it does not specify explicit train/validation/test splits (e.g., percentages or sample counts) for its experiments. |
| Hardware Specification | Yes | It takes around one week to train the model on a single Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch differentiation tools' but does not specify exact version numbers for PyTorch or any other software libraries or dependencies. |
| Experiment Setup | Yes | We train the model on 128×128 patches randomly sampled from the training images, with batch size 16, during 1500 epochs. We use the ADAM optimizer with learning rate 10⁻⁴, divided by 2 every 300 epochs. ... For all noise levels, we set σ = 1.8ν, λ_ν = ν²λ = 0.1 for motion blur ... and λ_ν = 0.075 for static blur ... The stopping criteria are ϵ = 10⁻⁵ and K = 400. (A sketch of this training schedule also follows the table.) |
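
To make the quoted pseudocode description concrete, the sketch below implements a PnP proximal-gradient loop built around a gradient step denoiser Dσ = Id − ∇gσ, with backtracking on the stepsize τ and the final extra gradient step. The interface names (`g_sigma`, `prox_f`, `f`), the Armijo-style sufficient-decrease test, and the relative-change stopping rule are illustrative assumptions, not the authors' released code.

```python
import torch

def gs_pnp(x0, g_sigma, prox_f, f, lam, tau, eps=1e-5, max_iter=400,
           gamma=0.1, eta=0.9):
    """Sketch of a PnP proximal-gradient loop with a gradient step denoiser.

    Assumed interfaces (not the authors' code):
      g_sigma(x): scalar learned potential; the denoiser is D_sigma = Id - grad g_sigma
      prox_f(x, tau): proximal operator of the data-fidelity term f
      f(x): data-fidelity value, so the monitored objective is F = f + lam * g_sigma
    """
    def grad_g(x):
        # Denoiser residual: x - D_sigma(x) = grad g_sigma(x), obtained by autodiff.
        x = x.detach().requires_grad_(True)
        return torch.autograd.grad(g_sigma(x), x)[0]

    def F(x):
        with torch.no_grad():
            return f(x) + lam * g_sigma(x)

    x = x0
    for _ in range(max_iter):
        # Gradient step on the regularizer, then proximal step on the data term.
        x_next = prox_f(x - tau * lam * grad_g(x), tau)
        # Backtracking on tau (a generic sufficient-decrease test, assumed here).
        while F(x_next) > F(x) - (gamma / tau) * (x_next - x).pow(2).sum():
            tau *= eta
            x_next = prox_f(x - tau * lam * grad_g(x), tau)
        converged = (x_next - x).norm() / x.norm() < eps
        x = x_next
        if converged:
            break
    # Extra gradient step Id - lam*tau*grad g_sigma to discard the residual
    # noise left by the last proximal step, as described in Algorithm 1.
    return x - lam * tau * grad_g(x)
```

Keeping the denoiser as an explicit gradient of a potential gσ is what allows the objective F = f + λgσ to be evaluated and monitored along the iterates, which is the basis of the paper's convergence guarantees.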
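The optimizer schedule quoted in the last row can likewise be sketched as below; the `denoiser` network and the data pipeline are placeholders (the actual architecture and loss are given in the paper and its supplementary code).

```python
import torch

# Stand-in for the gradient step denoiser network (real architecture in the paper).
denoiser = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# ADAM with learning rate 1e-4, halved every 300 epochs, for 1500 epochs,
# on batches of 16 randomly sampled 128x128 patches.
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=300, gamma=0.5)

for epoch in range(1500):
    ...  # one epoch over random 128x128 patches, batch size 16 (data loader omitted)
    scheduler.step()
```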