Self-Supervised Image Restoration with Blurry and Noisy Pairs
Authors: Zhilu Zhang, RongJian Xu, Ming Liu, Zifei Yan, Wangmeng Zuo
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real-world images show the effectiveness and practicality of the proposed method. Extensive experiments on synthetic data are conducted to evaluate our SelfIR. Both quantitative and qualitative results show that SelfIR outperforms the state-of-the-art self-supervised denoising methods, as well as the supervised denoising and deblurring counterparts. |
| Researcher Affiliation | Academia | Harbin Institute of Technology, Harbin, China; Peng Cheng Laboratory, China |
| Pseudocode | No | The paper describes the method using text and mathematical equations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/cszhilu1998/SelfIR. |
| Open Datasets | Yes | For synthesizing blurry images... Recently, the GoPro dataset [25] (https://seungjunnah.github.io/Datasets/gopro.html) offers a more realistic way to synthesize blurry images, which has been widely adopted for motion deblurring tasks [6,25,38,50,51]. The GoPro dataset used for synthetic experiments is public under a CC BY 4.0 license. |
| Dataset Splits | No | The paper states: 'Finally, there are 2,103 image pairs for training, and we use the remaining 1,111 pairs for testing.' It does not explicitly mention a separate validation split or how validation was performed. |
| Hardware Specification | Yes | All experiments are conducted with PyTorch [28] on an NVIDIA GeForce RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch [28]' but does not specify its version number or other software dependencies with specific version numbers. |
| Experiment Setup | Yes | During training, the batch size is set to 16 and the patch size is 128×128. The Adam optimizer [12] with β1 = 0.9 and β2 = 0.999 is used to train the network for 200 epochs. The learning rate is initially set to 3×10⁻⁴ for synthetic experiments and 1×10⁻⁴ for real-world experiments, and it is halved every 50 epochs. For the hyper-parameters in Eqn. (12), λ_aux is set to 2, and λ_reg is set to 2 and 4 for experiments in sRGB space and raw-RGB space, respectively. |
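As a rough illustration, the reported optimizer and schedule map onto a short PyTorch sketch. This is a minimal sketch assuming standard `torch.optim` components; the `model` stand-in and the loop body are hypothetical placeholders, since the actual SelfIR network and the loss of Eqn. (12) (with λ_aux and λ_reg) are defined only in the authors' repository.

```python
import torch

# Stand-in module; the real SelfIR network is not reproduced here.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam with beta1 = 0.9, beta2 = 0.999; initial lr is 3e-4 for
# synthetic experiments (1e-4 would be used for real-world ones).
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999))

# The learning rate is halved every 50 epochs over 200 training epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

batch_size, patch_size, num_epochs = 16, 128, 200

for epoch in range(num_epochs):
    # ... iterate over 128x128 patches in batches of 16, compute the
    # SelfIR loss of Eqn. (12), then optimizer.step() ...
    scheduler.step()
```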