NightRain: Nighttime Video Deraining via Adaptive-Rain-Removal and Adaptive-Correction

Authors: Beibei Lin, Yeying Jin, Wending Yan, Wei Ye, Yuan Yuan, Shunli Zhang, Robby T. Tan

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From extensive experiments, our method demonstrates state-of-the-art performance. It achieves a PSNR of 26.73 dB, surpassing existing nighttime video deraining methods by a substantial margin of 13.7% (a worked check of this margin follows the table). Relevant sections: Experiments; Evaluation on the SynNightRain Dataset; Evaluation on Real-world Datasets; Ablation Studies.
Researcher Affiliation | Collaboration | 1. National University of Singapore; 2. Huawei International Pte Ltd; 3. Beijing Jiaotong University
Pseudocode | No | The paper describes its proposed methods in text and mathematical equations, but it includes no explicitly labeled 'Pseudocode' or 'Algorithm' blocks and no structured steps formatted like code.
Open Source Code | No | The paper states: 'We will publicly release our collected dataset.' This refers to the dataset; the paper does not state that the source code for the method is released, nor does it provide a link to it.
Open Datasets | Yes | SynNightRain (Patil et al. 2022b,a) is a synthetic nighttime video deraining dataset. It comprises 30 nighttime videos of 200 frames each.
Dataset Splits | No | The paper states: 'We follow the protocol (Patil et al. 2022b) to evaluate the effectiveness of our method, i.e., 10 videos are used as the training set and the rest 20 videos are taken as the test set.' This specifies training and test sets but no distinct validation set (a split sketch follows the table).
Hardware Specification | No | The paper does not specify GPU models, CPU models, or any other hardware used to run the experiments.
Software Dependencies | No | The paper mentions 'The Adam is used to optimize our model' but names no programming languages, libraries, or frameworks with version numbers.
Experiment Setup | Yes | The paper gives extensive details under 'Implementation Details', including: 'In each training step, we randomly sample P×K videos, where P is the number of videos and K denotes the number of clips of a video. Each clip consists of 4 frames with the image size is 64×64. The Adam is used to optimize our model and the learning rate is set to 0.0002.' It also specifies settings for 'Pretraining', 'Adaptive-Rain-Removal', 'Adaptive-Correction', and the 'Transformer-based Video Diffusion Model': total training steps, sampling steps, confidence-generation iterations, thresholds, data-augmentation parameters (Gaussian noise variance, masking ratio), patch size, output channels, and the number of transformer blocks (a hedged training-loop sketch follows the table).
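
As a quick sanity check on the reported margin: assuming the 13.7% is a relative PSNR improvement over the previous best method (the report does not define the margin precisely), the implied prior best is about 23.5 dB. A minimal computation:

```python
# Back-of-envelope check of the reported margin.
# Assumption: 13.7% is a relative PSNR improvement over the prior best.
ours_psnr = 26.73  # dB, reported for NightRain on SynNightRain
margin = 0.137     # 13.7% relative improvement, as stated

prev_best = ours_psnr / (1 + margin)
print(f"Implied previous best: {prev_best:.2f} dB")  # ~23.51 dB
```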
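
For concreteness, here is a minimal sketch of the train/test split described in the Dataset Splits row. Which 10 videos form the training set is an assumption here (the protocol of Patil et al. 2022b fixes the actual assignment), and the video IDs are hypothetical:

```python
# SynNightRain as described: 30 nighttime videos, 200 frames each,
# split 10 (train) / 20 (test) following Patil et al. 2022b.
NUM_VIDEOS, FRAMES_PER_VIDEO = 30, 200

video_ids = [f"video_{i:02d}" for i in range(NUM_VIDEOS)]
train_videos = video_ids[:10]   # assumed: first 10 videos for training
test_videos = video_ids[10:]    # remaining 20 videos for testing

assert len(train_videos) == 10 and len(test_videos) == 20
```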
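
The quoted training configuration maps onto a standard optimizer setup. Below is a hedged PyTorch sketch: only the clip length (4 frames), crop size (64×64), optimizer (Adam), and learning rate (0.0002) come from the paper; the values of P and K, the stand-in model, and the L1 loss are placeholders, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Hedged sketch of the described setup: each step samples P*K clips
# (K clips from each of P videos), each clip 4 frames at 64x64,
# optimized with Adam at lr = 2e-4.
P, K = 4, 2                      # hypothetical batch-composition values
CLIP_LEN, CROP = 4, 64           # 4 frames per clip, 64x64 crops
LR = 2e-4                        # learning rate stated in the paper

model = nn.Conv3d(3, 3, kernel_size=3, padding=1)  # placeholder model,
# standing in for the paper's transformer-based video diffusion model
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

# One illustrative step on random tensors shaped like the described
# batch: (P*K clips, 3 channels, CLIP_LEN frames, CROP, CROP).
rainy = torch.randn(P * K, 3, CLIP_LEN, CROP, CROP)
clean = torch.randn(P * K, 3, CLIP_LEN, CROP, CROP)

optimizer.zero_grad()
loss = nn.functional.l1_loss(model(rainy), clean)  # loss choice assumed
loss.backward()
optimizer.step()
```

The pretraining, Adaptive-Rain-Removal, and Adaptive-Correction stages add further settings (sampling steps, confidence thresholds, augmentation parameters) on top of this base loop, per the paper's 'Implementation Details'.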