Learning Single Image Defocus Deblurring with Misaligned Training Pairs
Authors: Yu Li, Dongwei Ren, Xinya Shu, Wangmeng Zuo
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our JDRL can be applied to boost defocus deblurring networks in terms of both quantitative metrics and visual quality on DPDD, Real DOF and our SDD datasets. |
| Researcher Affiliation | Academia | Yu Li¹, Dongwei Ren¹*, Xinya Shu¹, Wangmeng Zuo¹,²; ¹School of Computer Science and Technology, Harbin Institute of Technology; ²Peng Cheng Laboratory |
| Pseudocode | No | The paper describes the method and components but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code, SDD dataset and the supplementary file are available at https://github.com/liyucs/JDRL. We also provide an implementation in HUAWEI Mindspore at https://github.com/Hunter-Will/JDRL-mindspore. |
| Open Datasets | Yes | A new dataset SDD with high resolution image pairs and diverse contents is established for single image defocus deblurring, benefiting future research in this field. ... The source code, SDD dataset and the supplementary file are available at https://github.com/liyucs/JDRL. ... Currently, DPDD (Abuolaim and Brown 2020) is the most popular real-world defocus deblurring dataset. ... Lee et al. (Lee et al. 2021) built the Real DOF dataset using a dual-camera system. |
| Dataset Splits | Yes | In the DPDD dataset, there are 350/76/74 image triplets for training/testing/validation, respectively. Each blurry image comes along with a sharp image. We use 350 blurry-sharp image pairs for training and 76 blurry images for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'HUAWEI Mindspore' as an implementation platform but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | During training, λ is set as 0.35 for generating the calibration masks, and the maximal radius of blur kernels m is set as 8. The parameters of JDRL are initialized using (He et al. 2015), and are optimized using the Adam optimizer (Kingma and Ba 2015). The learning rate is initialized as 2 × 10⁻⁵ and is halved every 60 epochs. The entire training stage ends with 100 epochs. (See the training-setup sketch below.) |
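
The quoted experiment setup maps onto a standard PyTorch training configuration. Below is a minimal sketch of that configuration under stated assumptions: the network, data loading, and loss are hypothetical placeholders (the actual JDRL architecture and losses are in the authors' repository at https://github.com/liyucs/JDRL); only the hyperparameters quoted above (He initialization, Adam, learning rate 2 × 10⁻⁵ halved every 60 epochs, 100 epochs, λ = 0.35, m = 8) are taken from the paper.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Hypothetical placeholder network; the real JDRL model is in the authors' repo.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Kaiming initialization (He et al. 2015), as stated in the paper.
def init_weights(m):
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(init_weights)

# Hyperparameters quoted in the paper.
LAMBDA_MASK = 0.35      # λ used when generating calibration masks
MAX_KERNEL_RADIUS = 8   # maximal radius m of the blur kernels
NUM_EPOCHS = 100

# Adam optimizer with learning rate 2e-5, halved every 60 epochs.
optimizer = Adam(model.parameters(), lr=2e-5)
scheduler = StepLR(optimizer, step_size=60, gamma=0.5)

for epoch in range(NUM_EPOCHS):
    # train_loader and compute_jdrl_loss are placeholders, not the paper's code:
    # for blurry, sharp in train_loader:
    #     optimizer.zero_grad()
    #     loss = compute_jdrl_loss(model(blurry), sharp)
    #     loss.backward()
    #     optimizer.step()
    scheduler.step()
```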