Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains

Authors: Wu Ran, Peirong Ma, Zhiquan He, Hao Ren, Hong Lu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments validate the efficacy of CoIC in boosting the deraining ability of CNN and Transformer models. CoIC also enhances deraining prowess remarkably when a real-world dataset is included.
Researcher Affiliation | Academia | Wu Ran, Peirong Ma, Zhiquan He, Hao Ren, Hong Lu; School of Computer Science, Fudan University; Shanghai Key Lab of Intelligent Information Processing; {wran21,zqhe22}@m.fudan.edu.cn, {prma20,hren17,honglu}@fudan.edu.cn
Pseudocode | Yes | Algorithm 1: CoI-M for a CNN layer, PyTorch-like
Open Source Code | Yes | Code is available at: https://github.com/Schizophreni/CoIC
Open Datasets | Yes | Synthetic and Real-world Datasets. We conduct extensive experiments utilizing five commonly adopted synthetic datasets: Rain200L & Rain200H (Yang et al., 2017), Rain800 (Zhang et al., 2019), DID-Data (Zhang & Patel, 2018), and DDN-Data (Fu et al., 2017). ... To evaluate the real-world deraining ability, we use the real-world dataset from (Wang et al., 2019) comprising 146 challenging rainy images, which we denote as RealInt. (Section 4.1) The paper also adds the real-world dataset SPA-Data (Wang et al., 2019).
Dataset Splits | Yes | Rain200L and Rain200H contain light and heavy rain respectively, each with 1800 image pairs for training and 200 for evaluation. Rain800... It has 700 pairs for training and 100 for testing. DID-Data... each with 4000/400 pairs for training/testing. DDN-Data consists of 12,600 training and 1400 testing pairs with 14 rain augmentations.
Hardware Specification | Yes | Experiments are implemented in PyTorch (Paszke et al., 2019) on NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' but does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | The base channel number in the feature extractor is set to 32. After each downsampling operation, the channel number is doubled. All LeakyReLU layers in the feature extractor have a negative slope of 0.1. The output dimension of the subspace projector is 128... For rain-/detail-aware contrastive learning, the number of detail-aware negative exemplars is set to Nb = 4 as suggested in (Wu et al., 2023). The blurred negative exemplars are generated using Gaussian blur with sigma uniformly sampled from the interval [0.3, 1.5]. The hyper-parameter λ balancing the contribution of the contrastive loss in equation 1 is empirically set to 0.2. (Section 4.1) BRN w/o and w/ CoIC is trained on 100×100 image patches, with a batch size of 12 for about 260k iterations. ... RCDNet w/o and w/ CoIC on image patches of size 64×64 with batch size 16 for about 260k iterations. For the large DGUNet, we train it w/o and w/ CoIC on image patches of size 128×128 with batch size 16 for about 400k iterations until convergence. The Transformer model IDT w/o and w/ CoIC is trained on image patches of size 128×128 with batch size 8 for about 300k iterations. We train DRSformer on mixed synthetic datasets on 96×96 image patches with batch size 4. (Appendix A.5)
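The pseudocode row above notes that Algorithm 1 presents CoI-M for a CNN layer in PyTorch-like form. For orientation only, here is a minimal NumPy sketch of one plausible reading: a per-image embedding drives a FiLM-style per-channel scale and shift of a layer's feature maps. The function name, projection matrices, and the exact modulation form are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def coim_modulate(features, embedding, w_scale, w_shift):
    """FiLM-style modulation sketch: per-channel scale/shift predicted
    from a rain-/detail-aware embedding of the input image.

    features:  (C, H, W) feature map from a CNN layer
    embedding: (D,) per-image representation
    w_scale, w_shift: (D, C) linear projections (learned in practice;
    random here for illustration)
    """
    gamma = embedding @ w_scale  # (C,) per-channel scale offsets
    beta = embedding @ w_shift   # (C,) per-channel shifts
    # Broadcast (C,) -> (C, 1, 1) over the spatial dimensions.
    return features * (1.0 + gamma[:, None, None]) + beta[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((32, 8, 8))   # 32 base channels, as in the setup row
emb = rng.standard_normal(128)           # 128-dim projector output, as above
w_s = rng.standard_normal((128, 32)) * 0.01
w_b = rng.standard_normal((128, 32)) * 0.01
out = coim_modulate(feat, emb, w_s, w_b)
```

The channel count (32) and embedding dimension (128) are taken from the experiment-setup row; everything else is a placeholder.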
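The detail-aware negative exemplars described in the setup row (Nb = 4 blurred copies, with Gaussian sigma drawn uniformly from [0.3, 1.5]) could be generated roughly as follows. This is a sketch under stated assumptions: SciPy's `gaussian_filter` stands in for whatever blur routine the authors actually use, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_blurred_negatives(image, n_neg=4, sigma_range=(0.3, 1.5), rng=None):
    """Generate detail-aware negative exemplars by Gaussian-blurring the input.

    image: (H, W, C) float array. A fresh sigma is sampled per exemplar and
    applied to the spatial axes only (sigma=0 on the channel axis).
    """
    rng = rng if rng is not None else np.random.default_rng()
    negatives = []
    for _ in range(n_neg):
        sigma = rng.uniform(*sigma_range)
        negatives.append(gaussian_filter(image, sigma=(sigma, sigma, 0)))
    return negatives

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
negs = make_blurred_negatives(img, n_neg=4, rng=rng)
```

In the contrastive loss these blurred copies serve as negatives against the sharp target, pushing the learned representation to be sensitive to fine detail.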