FMRNet: Image Deraining via Frequency Mutual Revision
Authors: Kui Jiang, Junjun Jiang, Xianming Liu, Xin Xu, Xianzheng Ma
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our proposed FMRNet delivers significant performance gains for seven datasets on image deraining task, surpassing the state-of-the-art method ELFormer by 1.14 dB in PSNR on the Rain100L dataset, while with similar computation cost. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China; 2 School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China; 3 Active Vision Lab, University of Oxford, United Kingdom. {jiangkui, jiangjunjun, csxm}@hit.edu.cn, xuxin@wust.edu.cn, maxianzheng@whu.edu.cn |
| Pseudocode | No | The paper includes architectural diagrams (Figure 2) but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and retrained models are available at https://github.com/kuijiang94/FMRNet. |
| Open Datasets | Yes | Following (Jiang et al. 2020), we use 13,700 clean/rain image pairs from (Zhang, Sindagi, and Patel 2020; Fu et al. 2017b) for training all compared methods to guarantee fairness since these methods are originally trained with different datasets. |
| Dataset Splits | No | The paper describes the data used for training and separate datasets for testing but does not provide specific train/validation/test splits (e.g., percentages or counts for a validation set) from the training data. |
| Hardware Specification | Yes | We use Adam optimizer with the learning rate (2×10⁻⁴ with the decay rate of 0.8 at every 80 epochs till 500 epochs) and batch size (8) to train FMRNet on a single NVIDIA 3090 GPU. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | In our baseline, the number of multi-level mutuality fusion module (MMFM) is empirically set to 10. To obtain training samples, the training images are coarsely cropped into small 256×256 patches. We use Adam optimizer with the learning rate (2×10⁻⁴ with the decay rate of 0.8 at every 80 epochs till 500 epochs) and batch size (8) to train FMRNet on a single NVIDIA 3090 GPU. A minimal sketch of this training recipe follows the table. |
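
The Experiment Setup and Hardware rows pin down a concrete training recipe: Adam, learning rate 2×10⁻⁴ decayed by 0.8 every 80 epochs for 500 epochs, batch size 8, 256×256 crops. Below is a minimal sketch of that recipe, assuming PyTorch (the paper does not state its framework or versions, per the Software Dependencies row); the one-layer model, random tensors, and L1 loss are placeholders, not FMRNet's actual architecture or objective — the authors' real code is at https://github.com/kuijiang94/FMRNet.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder network standing in for FMRNet; a single conv keeps the sketch runnable.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)

# Dummy rain/clean pairs standing in for the 13,700 training pairs,
# cropped to 256x256 patches as the table describes.
rainy = torch.rand(16, 3, 256, 256)
clean = torch.rand(16, 3, 256, 256)
loader = DataLoader(TensorDataset(rainy, clean), batch_size=8, shuffle=True)

# Reported settings: Adam, lr = 2e-4, decay rate 0.8 at every 80 epochs, 500 epochs.
optimizer = optim.Adam(model.parameters(), lr=2e-4)
scheduler = StepLR(optimizer, step_size=80, gamma=0.8)
criterion = nn.L1Loss()  # assumed objective; the table does not quote the loss function

for epoch in range(500):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # multiplies lr by 0.8 after epochs 80, 160, ...
```

Note that `StepLR(step_size=80, gamma=0.8)` is one direct reading of "decay rate of 0.8 at every 80 epochs"; the released code may implement the schedule differently.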