Strip Attention for Image Restoration

Authors: Yuning Cui, Yi Tao, Luoxi Jing, Alois Knoll

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | "To verify the effectiveness of our SANet, we conduct extensive experiments on several image restoration tasks, including single-image defocus deblurring (DPDD [Abuolaim and Brown, 2020]), image dehazing (RESIDE [Li et al., 2018]), and image desnowing (CSD [Chen et al., 2021])."
Researcher Affiliation | Academia | "1) School of Computation, Information and Technology, Technical University of Munich, Germany; 2) MIT Universal Village Program, USA; 3) School of Computer Science, Peking University, China"
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found.
Open Source Code | Yes | "The code is available at https://github.com/c-yn/SANet."
Open Datasets | Yes | "We train the network on the RESIDE [Li et al., 2018] dataset and test on the SOTS [Li et al., 2018] dataset. The results are reported in Table 1. Our SANet achieves better performance with lower complexity than most approaches. Particularly on the SOTS-Outdoor dataset, SANet yields a 2.83 dB performance gain over the expensive Transformer model DeHamer [Guo et al., 2022] with only 76% of the MACs and 3% of the parameters."
Dataset Splits | No | "We train the proposed network via Adam optimizer with β1 = 0.9, β2 = 0.999. The initial learning rate is set to 1e-4 and reduced to 1e-6 gradually with the cosine annealing. The batch size is set as 8 for the RESIDE-Outdoor [Li et al., 2018] dataset and 4 for others. Models are trained on the patch size of 256×256. We adopt only horizontal flips for data augmentation. We choose k1 = 7 and k2 = 11 in Eq. 5. According to the task complexity, we deploy varying numbers of residual blocks N in each scale for different tasks, i.e., N = 4 for image dehazing and desnowing, and N = 16 for image defocus deblurring."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were found.
Experiment Setup | Yes | "We train the proposed network via Adam optimizer with β1 = 0.9, β2 = 0.999. The initial learning rate is set to 1e-4 and reduced to 1e-6 gradually with the cosine annealing. The batch size is set as 8 for the RESIDE-Outdoor [Li et al., 2018] dataset and 4 for others. Models are trained on the patch size of 256×256. We adopt only horizontal flips for data augmentation. We choose k1 = 7 and k2 = 11 in Eq. 5. According to the task complexity, we deploy varying numbers of residual blocks N in each scale for different tasks, i.e., N = 4 for image dehazing and desnowing, and N = 16 for image defocus deblurring."
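The cosine-annealed learning-rate schedule described in the setup (initial rate 1e-4 decaying gradually to 1e-6) can be sketched in plain Python. This is a minimal illustrative sketch of the standard cosine annealing formula; the function name and the step/total-step parametrization are assumptions, not taken from the paper or its released code:

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=1e-4, lr_min=1e-6):
    """Standard cosine annealing: decay lr_max -> lr_min over total_steps.

    lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    """
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

# The schedule starts at the paper's initial rate and ends at the final one:
print(cosine_annealing_lr(0, 1000))     # ≈ 1e-4 (initial learning rate)
print(cosine_annealing_lr(1000, 1000))  # ≈ 1e-6 (final learning rate)
```

In a PyTorch training loop this behavior would typically come from `torch.optim.lr_scheduler.CosineAnnealingLR` wrapped around the Adam optimizer with the quoted betas (0.9, 0.999).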