GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images

Authors: Sungmin Cha, Taeeon Park, Byeongjoon Kim, Jongduk Baek, Taesup Moon

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In results, we show the denoiser trained with our GAN2GAN achieves an impressive denoising performance on both synthetic and real-world datasets for the blind denoising setting; it almost approaches the performance of the standard discriminatively-trained or N2N-trained models that have more information than ours, and it significantly outperforms the recent baseline for the same setting, e.g., Noise2Void, and a more conventional yet strong one, BM3D." "Figure 1 shows the denoising results on BSD68 (Roth & Black, 2009) for Gaussian noise with σ = 25. The blue line is the PSNR of the N2N model trained with noisy observation pairs of the clean images in the BSD training set, serving as an upper bound."
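The PSNR numbers referenced above (e.g., Figure 1 on BSD68 with Gaussian noise, σ = 25) rely on the standard peak signal-to-noise ratio. A minimal NumPy sketch of that metric follows; the `psnr` helper, the 8-bit peak of 255, and the random test image are our illustrative assumptions, not code from the paper.

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

# Example: corrupt a synthetic "clean" image with sigma = 25 Gaussian noise,
# matching the synthetic-noise setting the paper evaluates on BSD68.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(321, 481)).astype(np.float64)
noisy = clean + rng.normal(0.0, 25.0, size=clean.shape)
print(f"noisy-input PSNR: {psnr(clean, noisy):.2f} dB")  # about 20 dB before denoising
```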
Researcher Affiliation | Academia | "Sungmin Cha^1, Taeeon Park^1, Byeongjoon Kim^2, Jongduk Baek^2 and Taesup Moon^3; Sungkyunkwan University^1, Yonsei University^2, Seoul National University^3, South Korea"
Pseudocode | Yes | "Algorithm 1: Training a generative model. All experiments in this paper used the default values n_critic = 5, n_epoch = 30, m = 64, α_g = 4e-4, α_critic = 5e-5, α = 5, β = 1, γ = 10."
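The defaults in Algorithm 1 (n_critic = 5 critic steps per generator step, learning rates α_g = 4e-4 and α_critic = 5e-5, and γ = 10) match the common WGAN-GP training recipe. The PyTorch sketch below shows that critic/generator alternation with the quoted values; the toy network architectures, data shapes, and `train` function are placeholders of ours, and the paper's additional loss weights α = 5 and β = 1 (which combine its generator loss terms) are omitted.

```python
import torch
from torch import nn

# Hyperparameters quoted from Algorithm 1; everything else is a placeholder sketch.
N_CRITIC, N_EPOCH, BATCH = 5, 30, 64
LR_G, LR_CRITIC, GP_WEIGHT = 4e-4, 5e-5, 10.0

generator = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 1, 3, padding=1))
critic = nn.Sequential(nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                       nn.Flatten(), nn.LazyLinear(1))
opt_g = torch.optim.Adam(generator.parameters(), lr=LR_G, betas=(0.5, 0.9))
opt_c = torch.optim.Adam(critic.parameters(), lr=LR_CRITIC, betas=(0.5, 0.9))

def gradient_penalty(real, fake):
    """WGAN-GP term: penalize (||grad of critic at interpolates|| - 1)^2."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def train(loader):
    for _ in range(N_EPOCH):
        for step, real in enumerate(loader):   # real: (BATCH, 1, 96, 96) noisy patches
            fake = generator(torch.randn_like(real)).detach()
            loss_c = (critic(fake).mean() - critic(real).mean()
                      + GP_WEIGHT * gradient_penalty(real, fake))
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
            if (step + 1) % N_CRITIC == 0:     # one generator update per N_CRITIC critic updates
                loss_g = -critic(generator(torch.randn_like(real))).mean()
                opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```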
Open Source Code | Yes | "The official code of our method is available at https://github.com/csm9493/GAN2GAN."
Open Datasets | Yes | "In synthetic noise experiments, we always used the noisy training images from BSD400 (Martin et al., 2001). For evaluation, we used the standard BSD68 (Roth & Black, 2009) as a test set. For the real-noise experiment, we experimented on two datasets: the WF set in the microscopy image datasets in (Zhang et al., 2019) and the reconstructed CT dataset."
Dataset Splits | Yes | "In synthetic noise experiments, we always used the noisy training images from BSD400 (Martin et al., 2001). For evaluation, we used the standard BSD68 (Roth & Black, 2009) as a test set."
Hardware Specification | No | "Moreover, the inference time for BM3D is about 4.5-5.0 seconds per image since a noise estimation has to be done for each image separately, whereas that for G2Gj is only 4 ms (on GPU), which is another significant advantage of our method." The paper mentions running on a GPU but does not specify any particular model or other hardware details.
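The 4 ms figure above is a per-image GPU forward pass. The paper does not describe its timing procedure, so the sketch below is only a plausible way to measure such a latency in PyTorch; the `model` argument, the warmup count, and the explicit torch.cuda.synchronize() calls (needed because CUDA kernels launch asynchronously) are our assumptions.

```python
import time
import torch

def time_inference(model, image, device="cuda", warmup=10, runs=100):
    """Average per-image forward-pass latency in seconds on a CUDA device."""
    model = model.to(device).eval()
    x = image.to(device)
    with torch.no_grad():
        for _ in range(warmup):        # let cuDNN pick kernels, warm caches
            model(x)
        torch.cuda.synchronize()       # make sure warmup work is finished
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()       # wait for all queued kernels to complete
    return (time.perf_counter() - start) / runs
```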
Software Dependencies | No | "We put full details on training, model architectures and hyperparameters as well as the software platforms in the S.M." The paper refers to the supplementary material for software platforms but does not provide specific software names with version numbers in the main text.
Experiment Setup | Yes | "For the generative model training, the patch size used for D and N was 96×96, and n and N were set to 20,000 (BSD) and 40,000 (microscopy), respectively. For the iterative G2G training, the patch size for D was 120×120 and n = 20,500, and in every mini-batch, we generated new noisy pairs with g_θ1 as in the noise augmentation of (Zhang et al., 2017). The architecture of G2Gj was set to the 17-layer DnCNN in (Zhang et al., 2017)." The Algorithm 1 defaults quoted in the Pseudocode row above apply here as well.
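To make the quoted setup concrete, the sketch below combines a 17-layer DnCNN residual denoiser in the style of Zhang et al. (2017) with per-mini-batch noisy-pair generation from a trained generator. The class, the two-argument `g_theta1` call signature, and the `make_pair` helper are illustrative assumptions on our part, not the authors' released code.

```python
import torch
from torch import nn

class DnCNN(nn.Module):
    """17-layer DnCNN: Conv+ReLU, then 15 x (Conv+BN+ReLU), then Conv.
    The network predicts the residual (noise), which is subtracted from the input."""
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)   # residual learning: output = input - predicted noise

def make_pair(g_theta1, patches):
    """Generate a fresh noisy pair for each 120x120 patch in the mini-batch,
    as in the per-mini-batch pair generation the setup describes.
    g_theta1(patch_batch, z) is a placeholder signature for the trained generator."""
    with torch.no_grad():
        noisy_a = g_theta1(patches, torch.randn_like(patches))
        noisy_b = g_theta1(patches, torch.randn_like(patches))
    return noisy_a, noisy_b
```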