Dual-Domain Attention for Image Deblurring
Authors: Yuning Cui, Yi Tao, Wenqi Ren, Alois Knoll
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive comparisons with prior arts on the common benchmarks show that our model, named Dual Domain Attention Network (DDANet), obtains comparable results with a significantly improved inference speed. |
| Researcher Affiliation | Academia | ¹Technical University of Munich, ²MIT Universal Village Program, ³Shenzhen Campus of Sun Yat-sen University |
| Pseudocode | No | The paper describes the architecture and mathematical formulations for its modules but does not include any pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | Following recent works (Zamir et al. 2022; Tu et al. 2022), we utilize the GoPro (Nah, Hyun Kim, and Mu Lee 2017) dataset that contains 2,103 blurry/sharp image pairs for training and 1,111 pairs for evaluation. |
| Dataset Splits | Yes | Following recent works (Zamir et al. 2022; Tu et al. 2022), we utilize the GoPro (Nah, Hyun Kim, and Mu Lee 2017) dataset that contains 2,103 blurry/sharp image pairs for training and 1,111 pairs for evaluation. |
| Hardware Specification | Yes | Our experiments are performed on an NVIDIA Tesla V100 GPU and Intel Xeon Platinum 8255C CPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and specific learning rate strategies but does not provide specific version numbers for software dependencies like PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | We train DDANet with Adam (Kingma and Ba 2014) optimizer with the initial learning rate as 1×10⁻⁴, which is reduced to 1×10⁻⁶ via the cosine annealing strategy (Loshchilov and Hutter 2016). The network is trained on 256×256 patches with a batch size of 4 for 3000 epochs, and tested on the full resolution. For data augmentation, horizontal flips are randomly applied with a probability of 0.5. The kernel size of SAM is set as 3×3. |
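
The experiment-setup row quotes enough hyperparameters to reconstruct the training recipe. Below is a minimal PyTorch sketch of that recipe, not the authors' implementation: since DDANet's code is not released, a trivial convolution stands in for the model and random tensors stand in for GoPro blurry/sharp patches, and the L1-style loss is an assumption. Only the quoted settings (Adam, 1×10⁻⁴ → 1×10⁻⁶ cosine annealing, 256×256 patches, batch size 4, 3000 epochs, horizontal flips with probability 0.5) come from the paper.

```python
# Hedged sketch of the quoted training setup; model, data, and loss are placeholders.
import torch
import torch.nn.functional as F
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder for DDANet (not released)
optimizer = Adam(model.parameters(), lr=1e-4)                        # initial LR 1e-4
scheduler = CosineAnnealingLR(optimizer, T_max=3000, eta_min=1e-6)   # decays to 1e-6

# Stand-in for GoPro: 8 random 256x256 blurry/sharp pairs, batched by 4.
pairs = TensorDataset(torch.rand(8, 3, 256, 256), torch.rand(8, 3, 256, 256))
loader = DataLoader(pairs, batch_size=4, shuffle=True)

for epoch in range(3000):                         # 3000-epoch schedule from the paper
    for blurry, sharp in loader:
        if torch.rand(()).item() < 0.5:           # random horizontal flip, p = 0.5
            blurry = torch.flip(blurry, dims=[-1])
            sharp = torch.flip(sharp, dims=[-1])
        loss = F.l1_loss(model(blurry), sharp)    # loss choice is an assumption
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                              # cosine annealing step per epoch
```

For reference, `CosineAnnealingLR` follows the Loshchilov and Hutter (2016) schedule η_t = η_min + ½(η_max − η_min)(1 + cos(πt/T_max)), which smoothly decays the quoted initial rate of 1×10⁻⁴ toward 1×10⁻⁶ over the 3000 epochs.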