Single Image Rain Removal with Unpaired Information: A Differentiable Programming Perspective
Authors: Hongyuan Zhu, Xi Peng, Joey Tianyi Zhou, Songfan Yang, Vijay Chandrasekhar, Liyuan Li, Joo-Hwee Lim
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on a public benchmark demonstrate our promising performance compared with nine state-of-the-art methods in terms of PSNR, SSIM, visual quality, and running time. |
| Researcher Affiliation | Collaboration | (1) Institute for Infocomm Research, A*STAR, Singapore; (2) College of Computer Science, Sichuan University, China; (3) Institute of High Performance Computing, A*STAR, Singapore; (4) AI Lab, TAL Education Group, China |
| Pseudocode | No | The paper describes the model architecture and processes using natural language and mathematical equations, but it does not provide any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement about the release of source code or a link to a code repository. |
| Open Datasets | Yes | We use Rain800 (Zhang and Patel 2018) for benchmarking. The Rain800 dataset contains 700 synthesized images for training and 100 images for testing using randomly sampled outdoor images. |
| Dataset Splits | Yes | The Rain800 dataset contains 700 synthesized images for training and 100 images for testing using randomly sampled outdoor images. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions that 'The entire network is trained using the Pytorch framework.' but does not specify the version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Adam is used as the optimization algorithm with a mini-batch size of 1. The learning rate starts from 0.001. The models are trained for up to 10 epochs to ensure convergence. We use a weight decay of 0.0001 and a momentum of 0.9. The entire network is trained using the Pytorch framework. During training, we set γ = 1. (A hedged sketch of this configuration appears below the table.) |
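
The Experiment Setup row pins down the optimizer and schedule but not the architecture or loss, so the following is a minimal PyTorch sketch, not the authors' implementation (no code was released, per the Open Source Code row). `DerainNet`, the L1 reconstruction loss, and the random tensors standing in for the 700 Rain800 training pairs are all placeholder assumptions; the reported momentum of 0.9 is mapped onto Adam's beta1, since PyTorch's Adam exposes `betas` rather than a single momentum term.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in network; the paper's actual architecture is not reproduced here.
class DerainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Placeholder tensors standing in for the Rain800 rainy/clean training pairs.
rainy = torch.rand(8, 3, 64, 64)
clean = torch.rand(8, 3, 64, 64)
loader = DataLoader(TensorDataset(rainy, clean), batch_size=1, shuffle=True)

model = DerainNet()

# Adam with lr 0.001 and weight decay 1e-4, per the table. PyTorch's Adam
# has no single "momentum" parameter, so the reported momentum of 0.9 is
# assumed to map to beta1 = 0.9 (beta2 left at its default).
optimizer = optim.Adam(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=1e-4
)

gamma = 1.0              # the paper sets gamma = 1 during training
criterion = nn.L1Loss()  # assumed reconstruction loss; not stated in the table

for epoch in range(10):  # "trained for up to 10 epochs to ensure convergence"
    for x, y in loader:
        optimizer.zero_grad()
        loss = gamma * criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Swapping in the paper's generator and its unpaired losses would recover the actual training procedure; the skeleton above only fixes the hyperparameters the table reports.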