An Unsupervised Deep Learning Approach for Real-World Image Denoising

Authors: Dihan Zheng, Sia Huat Tan, Xiaowen Zhang, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world noisy image datasets have shown that the combination of neural networks and Gaussian denoisers improves the performance of the original Gaussian denoisers by a large margin.
Researcher Affiliation | Collaboration | Dihan Zheng1, Sia Huat Tan2, Xiaowen Zhang3, Zuoqiang Shi4,5, Kaisheng Ma2, Chenglong Bao1,5. 1Yau Mathematical Sciences Center, Tsinghua University; 2Institute for Interdisciplinary Information Science, Tsinghua University; 3Hisilicon; 4Department of Mathematical Sciences, Tsinghua University; 5Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. Emails: {zhengdh19,csf19}@mails.tsinghua.edu.cn, zhangxiaowen9@hisilicon.com, {zqshi,kaisheng,clbao}@mail.tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1: The Denoising Algorithm NN+X. Input: noisy image y, ρ, σ, η; Output: denoised image x.
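The NN+X pseudocode referenced above follows the familiar plug-and-play ADMM pattern: alternate a data-fidelity step, a denoiser step, and a dual update. Below is a minimal NumPy sketch of that pattern, not a reproduction of the paper's Algorithm 1. It assumes a least-squares data term, uses a simple box blur as a hypothetical stand-in for the Gaussian denoiser X, and treats η as a relaxation factor on the dual step (an assumption; the paper's exact use of σ and η may differ).

```python
import numpy as np

def box_blur(v):
    # Hypothetical stand-in for the Gaussian denoiser "X" in NN+X
    # (the paper plugs in off-the-shelf Gaussian denoisers instead).
    p = np.pad(v, 1, mode="edge")
    h, w = v.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def admm_denoise(y, rho=1.0, eta=0.5, iters=10):
    """Plug-and-play ADMM skeleton: minimize ||x - y||^2 subject to x = z,
    with the denoiser acting as the proximal operator of the image prior."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # data-fidelity (least-squares) step
        z = box_blur(x + u)                    # denoiser step (prior prox)
        u = u + eta * (x - z)                  # dual update, relaxed by eta
    return x
```

The defaults ρ = 1 and η = 0.5 mirror the values reported in the paper's experiment setup; the denoiser strength σ is absorbed into the stand-in blur here.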
Open Source Code | No | The paper provides no concrete access to source code for the described methodology: no repository link, no explicit code-release statement, and no code in the supplementary materials.
Open Datasets | Yes | We choose two natural real-world noisy image datasets, CC (Nam et al., 2016) and PolyU (Xu et al., 2018a), and one real fluorescence-microscopy dataset, FMDD (Zhang et al., 2019), for testing the performance of our method...
Dataset Splits | No | The paper uses standard datasets (e.g., CC, PolyU, FMDD, DND, SIDD, Set9, BSD68, CBSD68) but does not state explicit training/validation/test splits for its own experiments (no percentages, sample counts, or citations for the chosen splits where they deviate from the standard ones).
Hardware Specification | Yes | Both the encoder network G and the decoder network F are standard 10-layer U-Nets (Ronneberger et al., 2015), implemented in PyTorch and run on Nvidia 1080Ti or Nvidia 2080Ti GPUs.
Software Dependencies | No | The paper mentions PyTorch for implementation and the ADAM algorithm for optimization but does not provide version numbers for these software components.
Experiment Setup | Yes | The ADAM algorithm (Kingma & Ba, 2014) is adopted to optimize the network parameters, with the learning rate set to 0.01. The number of epochs in network training is set to 500, and the parameters ρ, σ, and η in ADMM are set to 1, 5, and 0.5, respectively.
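The reported hyperparameters can be made concrete with a minimal sketch of the ADAM update itself (bias-corrected first and second moments, as in Kingma & Ba, 2014) run for the stated 500 epochs at learning rate 0.01. The toy objective and the standalone implementation are illustrative only; the paper uses PyTorch's built-in optimizer, and the ADMM constants are simply echoed here for reference.

```python
import numpy as np

# Hyperparameters as reported in the paper: ADAM with learning rate 0.01,
# 500 training epochs, and ADMM parameters rho = 1, sigma = 5, eta = 0.5.
LR, EPOCHS = 0.01, 500
RHO, SIGMA, ETA = 1.0, 5.0, 0.5

def adam_step(theta, grad, m, v, t, lr=LR, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update with bias-corrected moment estimates."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimize f(theta) = ||theta||^2 for the reported number of epochs.
theta = np.array([1.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, EPOCHS + 1):
    grad = 2.0 * theta                      # gradient of ||theta||^2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because ADAM normalizes each step by the running gradient scale, the per-step movement is roughly the learning rate (0.01 here) regardless of the raw gradient magnitude, which is why the paper can use a comparatively large rate of 0.01.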