Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Authors: Ernest Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we present experimental results validating the theory. |
| Researcher Affiliation | Academia | 1Department of Mathematics, University of California, Los Angeles, USA 2Department of Computer Science and Engineering, Texas A&M University, USA. |
| Pseudocode | No | The paper presents mathematical formulations for the PnP methods but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code used for experiments is available at: https://github.com/uclaopt/Provable_Plug_and_Play/ |
| Open Datasets | Yes | The training data consists of images from the BSD500 dataset, divided into 40 × 40 patches. The CNN weights were initialized in the same way as (Zhang et al., 2017a). |
| Dataset Splits | No | The paper mentions training data (BSD500) and test sets (13 images from Chan et al., 2017) but does not provide explicit training, validation, and test dataset splits with percentages or counts. |
| Hardware Specification | Yes | On an Nvidia GTX 1080 Ti, DnCNN took 4.08 hours and real SN-DnCNN took 5.17 hours to train, so the added cost of real SN is mild. |
| Software Dependencies | No | The paper mentions the use of the ADAM optimizer but does not provide specific version numbers for any software dependencies like programming languages or deep learning frameworks. |
| Experiment Setup | Yes | The CNN weights were initialized in the same way as (Zhang et al., 2017a). We train all networks using the ADAM optimizer for 50 epochs, with a mini-batch size of 128. The learning rate was 10^-3 in the first 25 epochs, then decreased to 10^-4. |
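The training schedule quoted in the Experiment Setup row (ADAM, 50 epochs, mini-batch size 128, learning rate 10^-3 for the first 25 epochs and 10^-4 thereafter) can be sketched as a simple step schedule. This is an illustrative reconstruction, not the authors' code; the function and constant names are hypothetical.

```python
# Hedged sketch of the reported training hyperparameters.
# Constants taken from the paper's Experiment Setup row; names are our own.
EPOCHS = 50
BATCH_SIZE = 128

def learning_rate(epoch):
    """Step learning-rate schedule: 1e-3 for epochs 0-24, then 1e-4."""
    return 1e-3 if epoch < 25 else 1e-4

# Full per-epoch schedule over the 50 training epochs.
schedule = [learning_rate(e) for e in range(EPOCHS)]
```

In a framework such as PyTorch, the same effect would typically be achieved by pairing the ADAM optimizer with a step decay (e.g. multiplying the rate by 0.1 at epoch 25).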