Learning Pseudo-Contractive Denoisers for Inverse Problems
Authors: Deliang Wei, Peng Chen, Fang Li
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate superior performance of the pseudo-contractive denoiser compared to related denoisers. The proposed methods are competitive in terms of visual effects and quantitative values. |
| Researcher Affiliation | Academia | 1) School of Mathematical Sciences, Key Laboratory of MEA (Ministry of Education) & Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China; 2) Chongqing Key Laboratory of Precision Optics, Chongqing Institute of East China Normal University, Chongqing 401120, China. |
| Pseudocode | Yes | "Algorithm 1 PnPI-GD", "Algorithm 2 PnPI-HQS", "Algorithm 3 PnPI-FBS", "Algorithm 4 Power iterative method", "Algorithm 5 Modified power iterative method", "Algorithm 6 PnPI-HQS for solving Eq. (60)". |
| Open Source Code | Yes | "The source code and pretrained models are available at https://github.com/FizzzFizzz/Learning-Pseudo-Contractive-Denoisers-for-Inverse-Problems". |
| Open Datasets | Yes | "CBSD68 (Roth & Black, 2005) and Set12 (Zhang et al., 2017a) as test sets to show the effectiveness of our method. All the experiments are conducted under Linux system, Python 3.8.12 and Pytorch 1.10.2. ... For training details, we collect 800 images from DIV2K (Ignatov et al., 2019) as the training set and crop them into small patches of size 64×64." |
| Dataset Splits | No | "For training details, we collect 800 images from DIV2K (Ignatov et al., 2019) as the training set and crop them into small patches of size 64×64. ... We evaluate the Gaussian denoising performances of the proposed pseudo-contractive DRUNet (PC-DRUNet), 1/2-strictly pseudo-contractive DRUNet (SPC-DRUNet), ... on CBSD68 (Roth & Black, 2005) and Set12 (Zhang et al., 2017a) as test sets". The paper specifies training and test sets but does not explicitly mention a validation set, split percentages or counts, or standard splits that would allow the partitioning to be reproduced. |
| Hardware Specification | No | The paper only states that "All the experiments are conducted under Linux system," without providing specific details about CPU models, GPU models, memory, or other hardware components used for experiments. |
| Software Dependencies | Yes | All the experiments are conducted under Linux system, Python 3.8.12 and Pytorch 1.10.2. |
| Experiment Setup | Yes | The batch size is 32. We add the Gaussian noise with [σmin, σmax] = [0, 60] to the clean image. Adam optimizer is applied to train the model with learning rate lr = 10⁻⁴. We set r = 10⁻³ to ensure the regularity conditions in (13) and (14). |
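The reported setup (batch size 32, 64×64 patches, Gaussian noise with σ ∈ [0, 60], Adam with lr = 10⁻⁴) can be sketched as a minimal PyTorch training step. This is an illustrative reconstruction, not the authors' code: the model below is a hypothetical stand-in for the paper's PC-DRUNet, the MSE loss is an assumption, and the penalty enforcing the pseudo-contractive condition (the r = 10⁻³ regularization) is omitted.

```python
import torch

# Hypothetical stand-in for the paper's PC-DRUNet; any image-to-image
# denoiser with the same input/output shape would fit this sketch.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 1, 3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr = 10^-4 (paper)
sigma_min, sigma_max = 0.0, 60.0  # noise range [0, 60], assumed on a 0-255 scale

def training_step(clean_batch):
    """One step on a batch of 64x64 patches (batch size 32 in the paper)."""
    # Sample a noise level per image and add Gaussian noise.
    sigma = torch.empty(clean_batch.size(0), 1, 1, 1)
    sigma = sigma.uniform_(sigma_min, sigma_max) / 255.0
    noisy = clean_batch + sigma * torch.randn_like(clean_batch)
    # Loss choice (plain MSE) is an assumption; the paper additionally
    # enforces the pseudo-contractive condition, omitted in this sketch.
    loss = torch.nn.functional.mse_loss(model(noisy), clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

batch = torch.rand(32, 1, 64, 64)  # stand-in for cropped DIV2K patches
loss_value = training_step(batch)
```

The per-image noise-level sampling mirrors the stated [σmin, σmax] = [0, 60] range; whether sampling is per image or per batch is not specified in the excerpt.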