Generative Adaptive Convolutions for Real-World Noisy Image Denoising
Authors: Ruijun Ma, Shuyi Li, Bob Zhang, Zhengming Li
AAAI 2022, pp. 1935-1943 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the superior denoising performance of the proposed FADNet versus the state-of-the-art. In contrast to the existing deep denoisers, our FADNet is not only flexible and efficient, but also exhibits a compelling generalization capability, enjoying tremendous potential for practical usage. |
| Researcher Affiliation | Academia | Ruijun Ma (1, 2), Shuyi Li (1), Bob Zhang (1), Zhengming Li (2); 1: PAMI Research Group, Department of Computer and Information Science, University of Macau; 2: Guangdong Industrial Training Center, Guangdong Polytechnic Normal University |
| Pseudocode | No | The paper describes the network architecture and method steps but does not include formal pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement about open-source code availability or a link to a code repository. |
| Open Datasets | Yes | In this work, the training data was from SIDD (Abdelhamed, Lin, and Brown 2018). |
| Dataset Splits | Yes | SIDD also provided a medium version package, in which 320 image pairs were leveraged for fast training and 1280 image pairs for validation purposes. |
| Hardware Specification | Yes | All the experiments were carried out using the Pytorch library (Paszke et al. 2019) on a machine with an NVIDIA Titan Xp GPU. |
| Software Dependencies | No | The paper mentions PyTorch and Adam optimizer but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We utilized the Adam optimizer (Kingma and Ba 2014) to update the network, with β1 = 0.7, β2 = 0.999, and ϵ = 10⁻⁸. The learning rate was initially set as 0.001 and reduced to 0.0001 when the training errors held steady. |
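
The training configuration reported in the Experiment Setup row above can be expressed as a short PyTorch sketch. This is a minimal illustration, not the authors' released code: the model and training loop are placeholders, and the scheduler choice along with its `factor`, `patience`, and `min_lr` values are assumptions; only the Adam hyperparameters and the 0.001 to 0.0001 learning-rate reduction come from the paper.

```python
import torch

# Placeholder module standing in for FADNet; the real architecture is not released.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam hyperparameters as reported: β1 = 0.7, β2 = 0.999, ε = 10⁻⁸, initial lr = 0.001.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.7, 0.999),
    eps=1e-8,
)

# The paper reduces the learning rate to 0.0001 once training error plateaus.
# ReduceLROnPlateau is one way to express that rule; factor, patience, and
# min_lr here are assumptions, not values given by the authors.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10, min_lr=1e-4
)

for epoch in range(100):          # epoch count is not reported; placeholder value
    train_loss = 0.0              # accumulate the actual training loss here
    # ... forward/backward passes over the SIDD training pairs ...
    scheduler.step(train_loss)    # drops lr from 1e-3 toward the 1e-4 floor on plateau
```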