Selective Frequency Network for Image Restoration
Authors: Yuning Cui, Yi Tao, Zhenshan Bing, Wenqi Ren, Xinwei Gao, Xiaochun Cao, Kai Huang, Alois Knoll
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed SFNet on five restoration tasks: image motion/defocus deblurring, image deraining, image dehazing, and image desnowing. More details of the used datasets and training settings for each task are provided in Appendix A. FLOPs are computed on a patch size of 256×256 (a FLOPs-measurement sketch follows this table). |
| Researcher Affiliation | Collaboration | Yuning Cui1, Yi Tao2, Zhenshan Bing1, Wenqi Ren3,5, Xinwei Gao4, Xiaochun Cao3,5, Kai Huang3, Alois Knoll1. Affiliations: 1Technical University of Munich; 2MIT Universal Village Program; 3Sun Yat-sen University; 4Tencent; 5Chinese Academy of Sciences |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. It describes the methods using text and mathematical equations. |
| Open Source Code | Yes | Our code and models are available at https://github.com/c-yn/SFNet. |
| Open Datasets | Yes | We evaluate the proposed SFNet on five restoration tasks: image motion/defocus deblurring, image deraining, image dehazing, and image desnowing. More details of the used datasets and training settings for each task are provided in Appendix A. Datasets mentioned: DPDD, RESIDE (SOTS-Indoor/Outdoor), Dense-Haze, GoPro, HIDE, RSBlur, CSD, Rain100H, Rain100L, Test100, Test1200, Test2800. |
| Dataset Splits | No | The paper describes training and testing on established benchmarks but does not explicitly state dataset split ratios (e.g., 80/10/10) or how validation sets were created or used; it trains on the standard training sets and evaluates on the corresponding test sets. |
| Hardware Specification | Yes | We use PyTorch to implement our models on an NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework, but does not provide specific version numbers for PyTorch or other libraries/dependencies. |
| Experiment Setup | Yes | The batch size is set as 4 with a patch size of 256×256. Each patch is randomly flipped horizontally for data augmentation. The initial learning rate is 1e-4 and gradually reduced to 1e-6 with cosine annealing (Loshchilov & Hutter, 2016). Adam (β1 = 0.9, β2 = 0.999) is used for training. N is set to 15 in Fig. 1 (c). MDSF has two branches with filter kernel sizes of 3×3 and 5×5, respectively, and the number of groups is 8. (Training-configuration and MDSF sketches follow this table.) |
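
The Experiment Setup row above reports enough detail to reconstruct the optimizer and learning-rate schedule. Below is a minimal PyTorch sketch of that configuration, assuming a placeholder model, a placeholder L1 loss, and an assumed total step count (the paper defers per-task schedules to Appendix A); it is an illustration, not the authors' training script.

```python
# Sketch of the reported training configuration: batch size 4, 256x256 patches
# with random horizontal flips, Adam (beta1=0.9, beta2=0.999), and cosine
# annealing from 1e-4 down to 1e-6.
import torch
from torch import nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for SFNet

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
total_steps = 300_000  # assumption: per-task schedules are given in Appendix A
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps, eta_min=1e-6)

for step in range(total_steps):
    # Placeholder batch of 4 degraded/clean 256x256 patch pairs; a real run
    # would load paired crops with random horizontal flips from the training set.
    degraded = torch.rand(4, 3, 256, 256)
    clean = torch.rand(4, 3, 256, 256)

    loss = nn.functional.l1_loss(model(degraded), clean)  # assumed loss choice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

With this schedule, the learning rate starts at 1e-4 and decays along a cosine curve to 1e-6 at the final step, matching the figures quoted above.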
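The Research Type row notes that FLOPs are computed on 256×256 inputs. The paper does not name a profiling tool, so the sketch below uses ptflops, one common PyTorch complexity counter, on a placeholder model; note that ptflops reports multiply-accumulate operations (MACs), which some papers quote as FLOPs.

```python
# Measure model complexity at the stated 256x256 evaluation patch size.
# pip install ptflops
import torch.nn as nn
from ptflops import get_model_complexity_info

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for SFNet

macs, params = get_model_complexity_info(
    model, (3, 256, 256), as_strings=True, print_per_layer_stat=False)
print(f"MACs: {macs}, Params: {params}")
```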
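The Experiment Setup row also pins down the MDSF branch configuration: two branches with 3×3 and 5×5 kernels and 8 groups. The sketch below reproduces only that branch/group layout using static grouped convolutions and an assumed 1×1 fusion layer; the actual MDSF generates its filters dynamically to decompose features into frequency subbands, so this is a structural illustration only.

```python
# Structural sketch of a two-branch, 8-group filtering block matching the
# stated MDSF hyperparameters (kernel sizes 3x3 and 5x5, groups=8). The real
# MDSF applies dynamically generated filters; static grouped convs stand in here.
import torch
from torch import nn

class TwoBranchGroupedBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2, groups=groups)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # assumed fusion layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch filters the same input at a different receptive field;
        # their outputs are concatenated and fused back to `channels` maps.
        return self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))

x = torch.rand(1, 32, 64, 64)  # channel count must be divisible by groups
print(TwoBranchGroupedBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```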