When AWGN-Based Denoiser Meets Real Noises
Authors: Yuqian Zhou, Jianbo Jiao, Haibin Huang, Yang Wang, Jue Wang, Honghui Shi, Thomas Huang
AAAI 2020, pp. 13074–13081
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness and generalization of the proposed approach. |
| Researcher Affiliation | Collaboration | (1) IFP Group, UIUC; (2) University of Oxford; (3) Megvii Research; (4) Stony Brook University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/yzhouas/PD-Denoising-pytorch. |
| Open Datasets | Yes | For the color image model, we crop 50×50 patches with stride 10 from 432 color images in the Berkeley segmentation dataset (BSD) (Roth and Black 2009). The training data ratio of single-type noises (either AWGN or RVIN) and mixed noises (AWGN and RVIN) is 1:1. During training, the Adam optimizer is utilized, the learning rate is set to 10^-3, and the batch size is 128. After 30 epochs, the learning rate drops to 10^-4 and the training stops at epoch 50. To evaluate the algorithm on synthetic noise (AWGN, mixed AWGN-RVIN and spatially-variant Gaussian), we utilize the benchmark data from BSD68, Set20 (Xu et al. 2016) and CBSD68 (Roth and Black 2009). |
| Dataset Splits | No | The paper mentions 'training data' and 'test' sets but does not specify explicit train/validation/test dataset splits (e.g., percentages or counts for each split) or reference predefined splits for reproducibility. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU/CPU models, memory details) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | During training, the Adam optimizer is utilized, the learning rate is set to 10^-3, and the batch size is 128. After 30 epochs, the learning rate drops to 10^-4 and the training stops at epoch 50. (See the training-schedule sketch after this table.) |
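
The reported experiment setup (Adam, learning rate 10^-3 dropped to 10^-4 after epoch 30, batch size 128, training stopped at epoch 50, 50×50 patches cropped with stride 10) maps onto a short PyTorch configuration. The following is a minimal sketch under those stated hyperparameters; the model, dataset, loss function, and helper names here are placeholder assumptions, not the authors' released implementation.

```python
# Minimal sketch of the reported training setup, not the authors' code.
# Assumptions: a generic image-to-image denoising model, a dataset yielding
# (noisy, clean) 50x50 patch pairs, and an MSE reconstruction loss.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def crop_patches(img, size=50, stride=10):
    """Extract size x size patches with the given stride, as described
    for the BSD training images (hypothetical helper)."""
    h, w = img.shape[-2:]
    return [img[..., i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]


def train(model, train_dataset, num_epochs=50, lr_drop_epoch=30):
    loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Drop the learning rate from 1e-3 to 1e-4 after epoch 30.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[lr_drop_epoch], gamma=0.1)
    criterion = nn.MSELoss()  # assumed loss; not stated in the excerpt

    for epoch in range(num_epochs):
        for noisy, clean in loader:
            optimizer.zero_grad()
            loss = criterion(model(noisy), clean)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

Since the paper does not specify software versions or hardware, this sketch only fixes the quantities quoted in the table (optimizer, learning-rate schedule, batch size, epoch count, and patch geometry).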