Online-Updated High-Order Collaborative Networks for Single Image Deraining

Authors: Cong Wang, Jinshan Pan, Xiao-Ming Wu

AAAI 2022, pp. 2406-2413

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our proposed method performs favorably against eleven state-of-the-art methods on five public synthetic datasets and one real-world dataset."
Researcher Affiliation | Academia | 1. Department of Computing, The Hong Kong Polytechnic University; 2. School of Computer Science and Engineering, Nanjing University of Science and Technology
Pseudocode | Yes | Algorithm 1: Online-update Learning on Real-world Data (a hedged sketch of this loop appears after the table)
Open Source Code | No | The paper provides neither an explicit statement about code release nor a direct link to an implementation of the described methodology.
Open Datasets | Yes | "We use Rain200H (Yang et al. 2017), Rain200L (Yang et al. 2017), Rain1200 (Zhang and Patel 2018), Rain1400 (Fu et al. 2017), and Rain12 (Li et al. 2016) as the synthetic datasets for training and evaluation." (a paired-loader sketch follows the table)
Dataset Splits | No | The paper reports training and testing counts (e.g., "Rain200H is the most challenging dataset, which has 1800 image pairs for training and 200 pairs for testing.") but does not describe a separate validation split.
Hardware Specification | Yes | "Our model is trained with four NVIDIA RTX TITAN GPUs on the Pytorch platform."
Software Dependencies | No | The paper mentions the "Pytorch platform" but gives no version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "We set the number of channels of each convolutional layer except the last one as 20, and Leaky ReLU with α = 0.2 is used after each convolutional layer except for the last one. For the last layer, we use 3×3 convolution without any activation function in B, M, and T. We randomly crop 128×128 image patches as input, and the batch size is set as 12. We use the ADAM optimizer (Kingma and Ba 2015) to train the network. The initial learning rate is 0.0005, which will be divided by 10 at the 300-th and 400-th epochs, and the model training terminates after 500 epochs. We set λ = 0.0001, α1 = 1, α2 = 1, α3 = 1, β1 = 0.05, and β2 = 0.001. We train the model for 30 epochs on the real-world dataset, i.e., Epoch Real = 30." (training-configuration sketches follow the table)
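
For the datasets row, a minimal sketch of a paired rainy/clean loader with the 128×128 random crops and batch size 12 quoted above might look like the following. The directory layout (rain/ and norain/ subfolders), the file extension, and the Rain200H/train path are assumptions for illustration, not the official archive structure.

```python
import random
from pathlib import Path

from PIL import Image
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader

class PairedRainDataset(Dataset):
    """Minimal paired rainy/clean dataset. The rain/ and norain/ layout
    is a hypothetical convention, not the official benchmark structure."""

    def __init__(self, root, patch=128):
        self.rainy = sorted(Path(root, "rain").glob("*.png"))
        self.clean = sorted(Path(root, "norain").glob("*.png"))
        self.patch = patch
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.rainy)

    def __getitem__(self, idx):
        rainy = self.to_tensor(Image.open(self.rainy[idx]).convert("RGB"))
        clean = self.to_tensor(Image.open(self.clean[idx]).convert("RGB"))
        # Random 128x128 crop, applied identically to both images.
        _, h, w = rainy.shape
        top = random.randint(0, h - self.patch)
        left = random.randint(0, w - self.patch)
        crop = lambda x: x[:, top:top + self.patch, left:left + self.patch]
        return crop(rainy), crop(clean)

# Batch size 12 as reported in the paper's setup.
loader = DataLoader(PairedRainDataset("Rain200H/train"),
                    batch_size=12, shuffle=True, num_workers=4)
```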
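To make the quoted training setup concrete, here is a minimal PyTorch sketch. The stacked conv_block model is a toy stand-in, not the paper's high-order collaborative network; only the hyper-parameters (20-channel layers, Leaky ReLU with α = 0.2, a final 3×3 convolution without activation, Adam starting at 0.0005 and divided by 10 at epochs 300 and 400, 500 epochs in total) come from the paper.

```python
import torch
import torch.nn as nn

# Hyper-parameters quoted from the paper's experiment setup.
PATCH_SIZE, BATCH_SIZE = 128, 12
INIT_LR, TOTAL_EPOCHS = 5e-4, 500

def conv_block(in_ch, out_ch=20):
    # Every convolutional layer except the last has 20 channels and is
    # followed by Leaky ReLU with negative slope alpha = 0.2.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.LeakyReLU(0.2))

# Toy stand-in for one of the paper's sub-networks (B, M, or T):
# 20-channel conv blocks, then a 3x3 convolution with no activation.
model = nn.Sequential(conv_block(3), conv_block(20), conv_block(20),
                      nn.Conv2d(20, 3, 3, padding=1))

optimizer = torch.optim.Adam(model.parameters(), lr=INIT_LR)
# "divided by 10 at the 300-th and 400-th epochs" -> MultiStepLR, gamma=0.1.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[300, 400], gamma=0.1)

# The paper trains on four NVIDIA RTX TITAN GPUs; one simple way to mirror
# that would be torch.nn.DataParallel(model) when CUDA devices are available.
```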
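Finally, the pseudocode row refers to Algorithm 1, which fine-tunes the pretrained model on real-world data for 30 epochs (Epoch Real = 30). The paper's online-update objective is not reproduced here, so the loop below is only a hedged sketch: loss_fn, real_loader, and the fine-tuning learning rate are hypothetical placeholders.

```python
import torch

EPOCH_REAL = 30  # from the paper: 30 epochs on the real-world dataset

def online_update(model, real_loader, loss_fn, lr=1e-4):
    """Hedged sketch of Algorithm 1 (online-update learning on real data).

    real_loader yields batches of real rainy images, and loss_fn stands
    in for whatever objective the paper optimizes online; both names and
    the learning rate are assumptions, not the paper's actual API.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(EPOCH_REAL):
        for rainy in real_loader:
            restored = model(rainy)          # derained prediction
            loss = loss_fn(restored, rainy)  # self-supervised surrogate
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```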