R²MRF: Defocus Blur Detection via Recurrently Refining Multi-Scale Residual Features
Authors: Chang Tang, Xinwang Liu, Xinzhong Zhu, En Zhu, Kun Sun, Pichao Wang, Lizhe Wang, Albert Zomaya
Pages: 12063–12070
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed network on two commonly used defocus blur detection benchmark datasets by comparing it with 11 other state-of-the-art methods. Extensive experimental results with ablation studies demonstrate that R2MRF consistently and significantly outperforms the competitors in terms of both efficiency and accuracy. |
| Researcher Affiliation | Collaboration | 1School of Computer Science, China University of Geosciences, Wuhan 430074, China 2School of Computer Science, National University of Defense Technology, Changsha 410073, China 3College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua 321004, China 4Alibaba Group (U.S.) Inc, Bellevue, WA 98004, USA 5School of Information Technologies, University of Sydney, NSW 2006, Australia |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It provides network architecture diagrams instead. |
| Open Source Code | Yes | https://github.com/ChangTang/R2MRF |
| Open Datasets | Yes | Shi et al.'s dataset (Shi, Xu, and Jia 2014), which contains the remaining 100 defocus blurred images mentioned above. DUT (Zhao et al. 2018), a new defocus blur detection dataset consisting of 500 images with pixel-wise annotations. |
| Dataset Splits | No | The paper states: 'We divide the 704 defocus blurred images into two parts, i.e., 604 for training and the remaining 100 for testing.' It does not explicitly specify a separate validation dataset split. |
| Hardware Specification | Yes | We train our model on a machine equipped with an Intel 3.4GHz CPU with 32G memory and a Nvidia GTX 1080Ti GPU. As to the testing phase, we use only one Nvidia GTX 1080Ti GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch framework' but does not specify its version number or versions for other key software libraries. |
| Experiment Setup | Yes | We optimize the whole network by using Stochastic gradient descent (SGD) algorithm with the momentum of 0.9 and the weight decay of 0.0005. The learning rate is adjusted by the poly policy with the power of 0.9. The training batch size is set to 4 and the whole learning process stops after 6k iterations. |
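The "poly" learning-rate policy cited in the Experiment Setup row is a standard schedule in which the learning rate decays polynomially toward zero over the training run. A minimal sketch follows; the base learning rate (`base_lr = 0.01`) is an assumption, as the paper excerpt above does not state it, while the power (0.9) and the 6k-iteration budget come from the table.

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float = 0.9) -> float:
    """Poly learning-rate policy: lr = base_lr * (1 - iter / max_iter) ** power.

    base_lr is assumed here (the paper excerpt does not report it);
    power=0.9 and max_iter=6000 follow the reported setup.
    """
    return base_lr * (1.0 - cur_iter / max_iter) ** power


# Schedule over the reported 6k iterations (base_lr = 0.01 is assumed).
schedule = [poly_lr(0.01, it, 6000) for it in range(0, 6001, 1500)]
```

In PyTorch (the framework the paper reports using), the same policy can be applied per iteration with `torch.optim.lr_scheduler.LambdaLR` wrapping an SGD optimizer configured with momentum 0.9 and weight decay 0.0005, matching the reported hyperparameters.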