Intriguing Findings of Frequency Selection for Image Deblurring

Authors: Xintian Mao, Yiming Liu, Fengze Liu, Qingli Li, Wei Shen, Yan Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted to acquire a thorough analysis on the insights of the method. Moreover, after plugging the proposed block into NAFNet, we can achieve 33.85 dB in PSNR on the GoPro dataset. Our method noticeably improves backbone architectures without introducing many parameters, while maintaining low computational complexity.
Researcher Affiliation | Academia | 1) Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University; 2) Department of Computer Science, The Johns Hopkins University; 3) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Pseudocode | No | The paper describes procedural steps for its method but does not present them in a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code is available at https://github.com/DeepMed-Lab/DeepRFTAAAI2023.
Open Datasets | Yes | Three datasets are mainly evaluated: the GoPro (Nah, Kim, and Lee 2017), HIDE (Shen et al. 2019) and RealBlur (Rim et al. 2020) datasets.
Dataset Splits | No | The paper specifies training and testing sample counts for the GoPro, HIDE, RealBlur, and REDS datasets, e.g., 'train on 2,103 pairs of blurry and sharp images in GoPro dataset, and test on 1,111 image pairs in GoPro'. However, it does not explicitly mention distinct validation splits for all experiments.
Hardware Specification | Yes | We report number of parameters, FLOPs, and testing time per image (see supplementary material) on a workstation with an Intel Xeon Gold 6240C CPU and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions 'Adam (Kingma and Ba 2015)' as an optimizer and implicitly uses a deep learning framework (likely PyTorch, indicated by 'torch.Tensor'), but it does not specify exact version numbers for any software dependencies.
Experiment Setup | Yes | The network training hyperparameters (and the default values used) are patch size (256×256), batch size (16), training epochs (3,000), optimizer (Adam (Kingma and Ba 2015)), and initial learning rate (2×10⁻⁴). The learning rate is steadily decreased to 1×10⁻⁶ using the cosine annealing strategy (Loshchilov and Hutter 2017).
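
For concreteness, the reported optimizer and schedule can be expressed as a minimal PyTorch sketch. PyTorch is only implied by the paper's mention of 'torch.Tensor', and the model, loss, and data below are placeholders, not the authors' implementation:

import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder network and loss standing in for the paper's deblurring model.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
criterion = nn.L1Loss()  # assumed loss; the paper's actual loss may differ

epochs = 3000
optimizer = Adam(model.parameters(), lr=2e-4)                          # initial learning rate 2×10⁻⁴
scheduler = CosineAnnealingLR(optimizer, T_max=epochs, eta_min=1e-6)   # annealed down to 1×10⁻⁶

for epoch in range(epochs):
    # Dummy batch standing in for 16 blurry/sharp 256×256 patch pairs per step.
    blurry = torch.rand(16, 3, 256, 256)
    sharp = torch.rand(16, 3, 256, 256)
    optimizer.zero_grad()
    loss = criterion(model(blurry), sharp)
    loss.backward()
    optimizer.step()
    scheduler.step()  # one cosine-annealing step per epoch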