Generative Status Estimation and Information Decoupling for Image Rain Removal

Authors: Di Lin, Xin Wang, Jia Shen, Renjie Zhang, Ruonan Liu, Miaohui Wang, Wuyuan Xie, Qing Guo, Ping Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate SEIDNet on the public datasets, achieving state-of-the-art performances of image rain removal. The experimental results also demonstrate the generalization of SEIDNet, which can be easily extended to achieve state-of-the-art performances on other image restoration tasks (e.g., snow, haze, and shadow removal).
Researcher Affiliation | Academia | Di Lin (1), Xin Wang (2), Jia Shen (1), Renjie Zhang (2), Ruonan Liu (1), Miaohui Wang (3), Wuyuan Xie (3), Qing Guo (4), and Ping Li (2,*). (1) Tianjin University, China; (2) The Hong Kong Polytechnic University, Hong Kong; (3) Shenzhen University, China; (4) Center for Frontier AI Research, A*STAR, Singapore
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We release the implementation of SEIDNet via https://github.com/wxxx1025/SEIDNet.
Open Datasets | Yes | We evaluate SEIDNet on the public datasets for rain removal (i.e., Rain100H [16], Rain100L [16], Rain1400 [19], Rain13K [10] and SPA [3]), achieving state-of-the-art performances.
Dataset Splits | No | The paper provides training and testing split sizes for the datasets (e.g., "1,800/200/12,600/638,492 images for training, along with 100/100/1,400/1,000 images for testing") but does not explicitly mention a validation split.
Hardware Specification | No | The main paper does not explicitly describe the specific hardware (e.g., GPU/CPU models) used for running its experiments. The checklist refers to supplementary material for this information, which is not provided in the current text.
Software Dependencies | No | The main paper does not provide specific software dependencies with version numbers. The checklist refers to supplementary material for implementation details, which is not provided in the current text.
Experiment Setup | Yes | We define α = 4 as the weight of the KL divergence. The L2-norm and KL divergence compose the status estimation loss Lse as: Lse = L2(R, R′) + α · KL(G(µr, σr) || G(µf, σf)).
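The status estimation loss quoted above combines an L2 reconstruction term with a KL divergence between two Gaussian statuses. A minimal numerical sketch of that formula, assuming diagonal Gaussians so the KL term has the standard closed form for univariate normals (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def status_estimation_loss(R, R_hat, mu_r, sigma_r, mu_f, sigma_f, alpha=4.0):
    """Sketch of Lse = L2(R, R') + alpha * KL(G(mu_r, sigma_r) || G(mu_f, sigma_f)).

    Assumes diagonal Gaussians, so the KL divergence is a sum over
    per-dimension closed-form terms between univariate normals.
    """
    # L2 reconstruction term between the image R and its estimate R'
    l2 = np.sum((R - R_hat) ** 2)
    # Closed-form KL between N(mu_r, sigma_r^2) and N(mu_f, sigma_f^2), per dimension
    kl = np.sum(
        np.log(sigma_f / sigma_r)
        + (sigma_r ** 2 + (mu_r - mu_f) ** 2) / (2.0 * sigma_f ** 2)
        - 0.5
    )
    # alpha = 4 is the KL weight reported in the paper
    return l2 + alpha * kl
```

When both Gaussians coincide the KL term vanishes and the loss reduces to the L2 term alone, which is a quick sanity check for the implementation.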