Exploring Fourier Prior for Single Image Rain Removal
Authors: Xin Guo, Xueyang Fu, Man Zhou, Zhen Huang, Jialun Peng, Zheng-Jun Zha
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 Experiment; Section 4.1 Datasets and Settings; Section 4.2 Evaluations on Synthetic Data; Table 1: Comparison of average PSNR and SSIM values on synthetic benchmark datasets. |
| Researcher Affiliation | Academia | Xin Guo, Xueyang Fu, Man Zhou, Zhen Huang, Jialun Peng and Zheng-Jun Zha, University of Science and Technology of China, China |
| Pseudocode | No | The paper describes the network architecture and components using diagrams and textual descriptions, but it does not provide structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code will be publicly available at https://github.com/willinglucky/ExploringFourier-Prior-for-Single-Image-Rain-Removal. |
| Open Datasets | Yes | As in [Jiang et al., 2020; Zamir et al., 2021], we train our model on 13,712 clean-rain image pairs (for simplicity, denoted as Rain13k in the following) gathered from multiple synthetic datasets. Real-world Data: To test the performance of our method in real scenarios, we conduct experiments on the testing samples of the recent public real-world rainy dataset (for simplicity, denoted as RS in the following) [Quan et al., 2021], which contains nearly 150 rainy/clean image pairs for training and 100 pairs for testing. |
| Dataset Splits | No | The paper describes the training set (Rain13k) and multiple test sets (Rain100H, Rain100L, Test100, Test2800, Test1200, RS), but it does not explicitly mention a separate validation split with specific percentages or counts; it only mentions data augmentation and training parameters. |
| Hardware Specification | Yes | We test the MACs and inference time using 100 images with a size of 3 × 256 × 256 on one Nvidia 3090 GPU. (A timing sketch mirroring this protocol follows the table.) |
| Software Dependencies | No | The paper mentions using the Adam optimizer and evaluating with PSNR and SSIM, but it does not provide specific version numbers for software dependencies such as the programming language (e.g., Python), the deep learning framework (e.g., PyTorch, TensorFlow), or other libraries. (A minimal PSNR definition follows the table.) |
| Experiment Setup | Yes | The networks are trained with the Adam optimizer. The learning rate is set to 2 × 10^−4 by default and decreased to 1 × 10^−8 with a cosine annealing strategy. Our models are trained on 256 × 256 patches with a batch size of 64 for 8 × 10^5 iterations. Random flipping and rotation are applied as data augmentation. (A training-configuration sketch follows the table.) |
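
For the Hardware Specification row, the timing protocol (average inference time over 100 images of size 3 × 256 × 256 on one GPU) can be reproduced with a short script. Below is a minimal sketch, assuming PyTorch (the paper does not name its framework); the single conv layer is a hypothetical stand-in for the paper's actual deraining network.

```python
# Hypothetical timing sketch: average GPU inference time over 100 images of
# size 3 x 256 x 256, mirroring the protocol quoted in the table above.
# The one-layer model is a placeholder, not the paper's network.
import time
import torch
import torch.nn as nn

device = "cuda"  # e.g. one Nvidia 3090
model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device).eval()
inputs = [torch.rand(1, 3, 256, 256, device=device) for _ in range(100)]

with torch.no_grad():
    torch.cuda.synchronize()   # flush pending kernels before starting the clock
    start = time.time()
    for x in inputs:
        model(x)
    torch.cuda.synchronize()   # wait for all kernels to finish before stopping
elapsed = time.time() - start
print(f"average inference time: {elapsed / len(inputs) * 1e3:.2f} ms/image")
```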
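Since the Software Dependencies row notes that PSNR and SSIM are reported without pinned library versions, here is the standard PSNR definition as a minimal sketch; the paper does not specify which implementation it uses, and SSIM is typically taken from a library such as scikit-image (an assumption, not something the paper states).

```python
# Minimal PSNR sketch for image tensors scaled to [0, 1]. This is the
# textbook definition; the paper's exact implementation is unspecified.
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = torch.mean((pred - target) ** 2)
    return float(10 * torch.log10(max_val ** 2 / mse))
```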
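Finally, the Experiment Setup row maps directly onto a training-loop configuration. The sketch below again assumes PyTorch, uses a hypothetical one-layer model, and substitutes random tensors for a real Rain13k loader; only the optimizer, learning-rate schedule, patch size, batch size, iteration count, and flip/rotation augmentation come from the paper. The L1 loss is an assumption for illustration.

```python
# Training-configuration sketch under the assumptions stated above:
# Adam, lr 2e-4 cosine-annealed to 1e-8, 256x256 patches, batch size 64,
# 8e5 iterations, random flip/rotation augmentation.
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder for the paper's network
optimizer = Adam(model.parameters(), lr=2e-4)       # learning rate 2e-4, as reported
total_iters = 800_000                               # 8 x 10^5 iterations, as reported
scheduler = CosineAnnealingLR(optimizer, T_max=total_iters, eta_min=1e-8)

def augment(rainy, clean):
    """Random flips and 90-degree rotations, applied identically to the
    rainy input and its clean ground truth."""
    if torch.rand(1) < 0.5:
        rainy, clean = rainy.flip(-1), clean.flip(-1)
    if torch.rand(1) < 0.5:
        rainy, clean = rainy.flip(-2), clean.flip(-2)
    k = int(torch.randint(0, 4, (1,)))
    return rainy.rot90(k, dims=(-2, -1)), clean.rot90(k, dims=(-2, -1))

for it in range(total_iters):
    # Random tensors stand in for a loader sampling 64 random 256x256
    # crops from the Rain13k training pairs.
    rainy, clean = torch.rand(64, 3, 256, 256), torch.rand(64, 3, 256, 256)
    rainy, clean = augment(rainy, clean)
    loss = nn.functional.l1_loss(model(rainy), clean)  # loss choice is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                   # cosine decay toward 1e-8
```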