Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse Problems

Authors: Samuel Hurault, Ulugbek Kamilov, Arthur Leclaire, Nicolas Papadakis

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluations conducted on various Poisson inverse problems validate the convergence results and showcase effective restoration performance.
Researcher Affiliation | Academia | Samuel Hurault (Univ. Bordeaux, CNRS, INRIA, Bordeaux INP, IMB, UMR 5251; samuel.hurault@math.u-bordeaux.fr); Ulugbek Kamilov (Washington University in St. Louis; kamilov@wustl.edu); Arthur Leclaire (Univ. Bordeaux, CNRS, INRIA, Bordeaux INP, IMB, UMR 5251, and LTCI, Télécom Paris, IP Paris; arthur.leclaire@telecom-paris.fr); Nicolas Papadakis (Univ. Bordeaux, CNRS, INRIA, Bordeaux INP, IMB, UMR 5251; nicolas.papadakis@math.u-bordeaux.fr)
Pseudocode | No | The paper describes its algorithms using mathematical equations and textual explanations (e.g., equations (18) and (20)), but it does not include explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We use the same training dataset as [Zhang et al., 2021]." and "Average denoising PSNR performance of Inverse Gamma noise denoisers B-DRUNet and DRUNet on 256 × 256 center-cropped images from the CBSD68 dataset, for various noise levels γ."
Dataset Splits | No | The paper states that hyperparameters are optimized by grid search and refers to using the same training dataset as [Zhang et al., 2021], but it does not explicitly provide the training/validation/test splits used in its own experiments.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., GPU models, CPU types, memory) used to run its experiments.
Software Dependencies | No | The paper mentions using the DRUNet architecture and the ADAM optimizer but does not specify versions of core software dependencies such as the programming language, deep learning framework (e.g., PyTorch, TensorFlow), or CUDA.
Experiment Setup | Yes | "Training is performed with ADAM during 1200 epochs. The learning rate is initialized to 10^-4 and is divided by 2 at epochs 300, 600 and 900. The algorithm terminates when the relative difference between consecutive values of the objective function is less than 10^-8 or the number of iterations exceeds K = 500. The hyper-parameters γ, λ are optimized for each algorithm and for each noise level α by grid search. Initialization is done with x0 = A^T y." and "Table 2: B-RED and B-PnP hyperparameters"
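For context on the quoted training schedule (ADAM, 1200 epochs, learning rate 10^-4 halved at epochs 300, 600 and 900), here is a minimal sketch of how such a schedule could be set up, assuming PyTorch (the paper does not state its framework); the model, data batches, and loss are hypothetical placeholders standing in for the DRUNet-based denoiser and its training data.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Placeholders: the actual DRUNet-based denoiser, training batches, and loss
# are defined in the paper and not reproduced here.
model = torch.nn.Conv2d(3, 3, 3, padding=1)                          # stand-in for the denoiser
loader = [(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))]    # stand-in (noisy, clean) batches
criterion = torch.nn.MSELoss()

optimizer = Adam(model.parameters(), lr=1e-4)                        # learning rate initialized to 10^-4
scheduler = MultiStepLR(optimizer, milestones=[300, 600, 900], gamma=0.5)  # halved at epochs 300, 600, 900

for epoch in range(1200):                                            # training during 1200 epochs
    for noisy, clean in loader:
        optimizer.zero_grad()
        loss = criterion(model(noisy), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()
```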
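Similarly, the quoted stopping rule and initialization can be expressed as a generic iterative loop. The sketch below is not the paper's B-PnP/B-RED update (equations (18) and (20)); that update and the objective are abstracted as opaque callables `update` and `objective`, hypothetical names introduced here only for illustration.

```python
import numpy as np

def run_restoration(update, objective, A, y, K=500, tol=1e-8):
    """Generic iterative loop with the reported stopping rule.

    `update` and `objective` stand in for the paper's iteration and
    objective function; they are not specified in this report.
    """
    x = A.T @ y                      # initialization x0 = A^T y
    F_prev = objective(x)
    for _ in range(K):               # at most K = 500 iterations
        x = update(x)
        F = objective(x)
        # stop when the relative difference between consecutive
        # objective values is less than 10^-8
        if abs(F - F_prev) / max(abs(F_prev), 1e-30) < tol:
            break
        F_prev = F
    return x
```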