Unsupervised Image Denoising with Score Function

Authors: Yutong Xie, Mingze Yuan, Bin Dong, Quanzheng Li

NeurIPS 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | 4 Experiment, Table 2: Quantitative comparison for various parameters of Σ in additive Gaussian noise using different methods in terms of PSNR (dB)/SSIM.
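Table 2 reports results as PSNR (dB)/SSIM. For reference, PSNR for images scaled to [0, 1] can be computed as below; this is a minimal sketch of the standard metric, not code from the paper, and the function name and NumPy implementation are my own.

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image x and a
    denoised estimate y, both scaled to [0, max_val]."""
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.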
Researcher Affiliation | Academia | Yutong Xie (Peking University), Mingze Yuan (Peking University), Bin Dong (Peking University), Quanzheng Li (Massachusetts General Hospital and Harvard Medical School)
Pseudocode | Yes | Algorithm 1: The general denoising process; Algorithm 2: An iterative trick to solve x = Σ(x)s(y) + y; Algorithm 3: An iterative method to solve Eq. 7 in the case of Rayleigh noise; Algorithm 4: The general framework to solve Eq. 7 for the correlated multiplicative noise model; Algorithm 5: The full denoising process for the mixture noise y = z + ϵ
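The iterative trick of Algorithm 2 amounts to a fixed-point iteration on x = Σ(x)s(y) + y: the score is evaluated once at the noisy input y, and x is updated repeatedly through the (possibly signal-dependent) noise variance. A minimal elementwise sketch, assuming scalar per-pixel variance; the helper names, the NumPy implementation, and the toy `score`/`sigma2_of` callables in the test are mine, not the paper's:

```python
import numpy as np

def fixed_point_denoise(y, score, sigma2_of, n_iter=10):
    """Fixed-point iteration for x = Sigma(x) * s(y) + y.

    `score` stands in for the learned score function s(.), and
    `sigma2_of` for the signal-dependent noise variance Sigma(.);
    both are hypothetical placeholders. The default of 10 iterations
    matches the paper's reported setting for solving Eq. 7.
    """
    x = np.array(y, dtype=float)
    s = score(y)              # score is evaluated once, at the noisy input
    for _ in range(n_iter):   # iterate x <- Sigma(x) * s(y) + y
        x = sigma2_of(x) * s + y
    return x
```

When Σ is constant (additive Gaussian noise), the iteration converges in a single step to the closed-form update x = Σ s(y) + y.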
Open Source Code | No | The paper mentions implementation details but does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | DIV2K [22] and the CBSD500 dataset [3] are used as training datasets; the Kodak dataset, CBSD68 [15], and CSet9 are used for evaluation.
Dataset Splits | No | The paper uses DIV2K and CBSD500 for training and Kodak, CBSD68, and CSet9 for evaluation, but it does not specify explicit train/validation/test split percentages, sample counts, or a splitting methodology, nor does it mention a validation set.
Hardware Specification | Yes | All the models are implemented in PyTorch [18] with an NVIDIA V100.
Software Dependencies | No | The paper mentions PyTorch [18] but does not provide a specific version number for it or for other software dependencies.
Experiment Setup | Yes | When training, the training images are randomly cropped to 128×128 patches. The AdamW optimizer [14] is used to train the network. Each model is trained for 5000 steps with a batch size of 32. The learning rate is initialized to 1×10⁻⁴ for the first 4000 steps and decreased to 1×10⁻⁵ for the final 1000 steps. When an iterative algorithm is needed to solve Eq. 7, the number of iterations is set to 10.
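The quoted setup (random 128×128 crops; learning rate 1×10⁻⁴ for the first 4000 of 5000 steps, then 1×10⁻⁵) can be sketched as two small helpers. This is an illustrative NumPy/pure-Python sketch under those stated hyperparameters, not the authors' code, and the function names are my own:

```python
import numpy as np

def random_crop(img, patch=128, rng=None):
    """Randomly crop a patch x patch training window from an HxWxC image,
    mirroring the paper's 'randomly clip the training images to patches'."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    top = rng.integers(0, h - patch + 1)
    left = rng.integers(0, w - patch + 1)
    return img[top:top + patch, left:left + patch]

def lr_at_step(step, base_lr=1e-4, drop_lr=1e-5, drop_at=4000):
    """Piecewise-constant schedule: base_lr for the first 4000 steps,
    drop_lr for the final 1000 of the 5000 total."""
    return base_lr if step < drop_at else drop_lr
```

In a training loop, `lr_at_step` would be applied per step, e.g. by setting the learning rate of each AdamW parameter group before the optimizer step.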