Real-World Deep Local Motion Deblurring
Authors: Haoying Li, Ziran Zhang, Tingting Jiang, Peng Luo, Huajun Feng, Zhihai Xu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments prove the reliability of the ReLoBlur dataset, and demonstrate that LBAG achieves better performance than state-of-the-art global deblurring methods and that our proposed local blur-aware techniques are effective. |
| Researcher Affiliation | Academia | (1) College of Optical Science and Engineering, Zhejiang University; (2) Research Center for Intelligent Sensing Systems, Zhejiang Laboratory. {lhaoying, naturezhanghn, luop, fenghj, xuzh}@zju.edu.cn, eagerjtt@zhejianglab.com |
| Pseudocode | No | The paper describes methods with figures and textual explanations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states "ReLoBlur contains 2405 image pairs and we will release the dataset soon" and provides a project homepage link (https://leiali.github.io/ReLoBlur_homepage/index.html), but there is no explicit statement or link confirming the release of the source code for the described methodology. |
| Open Datasets | Yes | We establish the first real local motion blur dataset, ReLoBlur, captured by a synchronized beam-splitting photographing system in daily real scenes, and corrected by a post-processing pipeline. ReLoBlur contains 2405 image pairs and we will release the dataset soon. https://leiali.github.io/ReLoBlur_homepage/index.html |
| Dataset Splits | No | The paper states "We split the ReLoBlur dataset into 2010 pairs for training and 395 pairs for testing" but does not provide details for a validation split. |
| Hardware Specification | Yes | For a fair comparison, we trained LBAG and the baseline deblurring methods for the same steps on 1 GeForce RTX 3090 with 24GB of memory. |
| Software Dependencies | No | The paper mentions specific optimizers and methods (e.g., Adam, Pyflow, CMTF) but does not provide version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We crop the images to 256 × 256 patches as the training inputs using the BAPC strategy. For data augmentation, each patch is horizontally or vertically flipped with a probability of 0.5. We use Adam (Kingma and Ba 2014) as the optimizer, with a batch size of 12 and an initial learning rate of 10⁻⁴, which is halved every 100k steps. The training procedure takes approximately 70 hours (300k steps). |
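The reported setup (initial learning rate 10⁻⁴ halved every 100k steps; each patch flipped horizontally or vertically with probability 0.5) can be sketched with the standard library alone. This is an illustrative reconstruction, not the authors' code: the function names `step_decay_lr` and `random_flip` are our own, and the patch is modeled as a plain list of rows rather than a tensor.

```python
import random

def step_decay_lr(step, base_lr=1e-4, halve_every=100_000):
    """Learning-rate schedule from the paper's setup: the initial
    rate (1e-4) is halved once every 100k training steps."""
    return base_lr * 0.5 ** (step // halve_every)

def random_flip(patch, p=0.5, rng=random):
    """Data augmentation from the paper's setup: flip a 2-D patch
    horizontally and/or vertically, each with probability p.
    `patch` is a list of rows (any sequence of sequences)."""
    if rng.random() < p:                     # horizontal flip
        patch = [list(row[::-1]) for row in patch]
    if rng.random() < p:                     # vertical flip
        patch = patch[::-1]
    return patch
```

Over the stated 300k steps this schedule passes through 1e-4, 5e-5, and 2.5e-5, ending at 1.25e-5 for the final 100k steps.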