High-Quality Self-Supervised Deep Image Denoising

Authors: Samuli Laine, Tero Karras, Jaakko Lehtinen, Timo Aila

NeurIPS 2019

Reproducibility (Variable / Result / LLM Response)
Research Type: Experimental. "In this section, we detail the implementation of our denoising scheme in Gaussian, Poisson, and impulse noise. In all our experiments, we use a modified version of the five-level U-Net [23] architecture used by Lehtinen et al. [17], to which we append three 1 × 1 convolution layers. We construct our convolutional blind-spot networks based on this same architecture. Details regarding network architecture, training, and evaluation are provided in the supplement. Our training data comes from the 50k images in the ILSVRC2012 (Imagenet) validation set, and our test datasets are the commonly used KODAK (24 images), BSD300 validation set (100 images), and SET14 (14 images). [...] Table 1 shows the output image quality for the various methods and ablations tested. Example result images are shown in Figure 2."
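The convolutional blind-spot construction referenced above (Laine et al. restrict each branch's receptive field to a half-plane by shifting feature maps, then combine four 90° rotations so that every pixel except the centre is covered) can be illustrated with a minimal numpy sketch. The function names are illustrative, and the paper's shifted U-Net is replaced here by a single one-pixel shift, so each branch sees exactly one neighbouring pixel:

```python
import numpy as np

def shift_down(x, k=1):
    """Shift a (H, W) feature map down by k pixels, zero-padding the top.
    In the full network, stacking such shifts with vertically causal
    convolutions restricts each output pixel's receptive field to the
    half-plane strictly above it."""
    h, _ = x.shape
    out = np.zeros_like(x)
    out[k:, :] = x[: h - k, :]
    return out

def blind_spot_branches(x):
    """Run the (here: trivial, one-shift) half-plane operation on four
    90-degree rotations of the input and rotate the results back.
    The union of the four half-planes is the whole image minus the
    centre pixel, i.e. a blind spot at every location."""
    branches = []
    for r in range(4):
        rot = np.rot90(x, r)
        shifted = shift_down(rot)  # stand-in for the shifted U-Net branch
        branches.append(np.rot90(shifted, -r))
    return np.stack(branches, axis=0)

# The centre pixel never contributes to any branch's output at its own
# location: place a single impulse and check its own position stays zero.
x = np.zeros((5, 5))
x[2, 2] = 1.0
b = blind_spot_branches(x)
assert b[:, 2, 2].sum() == 0.0
```

Averaging the four branches here yields a 4-neighbour blind-spot filter; in the paper, the rotated branch outputs are instead concatenated and fused by the appended 1 × 1 convolution layers, which preserve the blind-spot property because they mix no spatial context.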
Researcher Affiliation: Collaboration. Samuli Laine (NVIDIA), Tero Karras (NVIDIA), Jaakko Lehtinen (NVIDIA, Aalto University), Timo Aila (NVIDIA); contact: {slaine, tkarras, jlehtinen, taila}@nvidia.com.
Pseudocode: No. The paper does not contain any structured pseudocode or algorithm blocks; the methods are described in prose and mathematical expressions.
Open Source Code: No. The paper neither states that source code for the described methodology will be released nor provides a link to a code repository.
Open Datasets: Yes. "Our training data comes from the 50k images in the ILSVRC2012 (Imagenet) validation set, and our test datasets are the commonly used KODAK (24 images), BSD300 validation set (100 images), and SET14 (14 images)."
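Output image quality on these test sets (Table 1) is reported as PSNR, the standard denoising metric. A minimal sketch of the metric, assuming images normalized to [0, 1] (the function name is illustrative):

```python
import numpy as np

def psnr(clean, denoised, peak=1.0):
    """Peak signal-to-noise ratio in dB. 'peak' is the maximum possible
    pixel value: 1.0 for images in [0, 1], 255 for 8-bit images."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
print(psnr(np.zeros((4, 4)), np.full((4, 4), 0.1)))  # -> 20.0
```

Reported PSNR figures are typically averaged over all images in a test set (e.g. the 24 KODAK images), so small differences in averaging or clipping conventions can shift results by fractions of a dB.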
Dataset Splits: No. The paper states, "Our training data comes from the 50k images in the ILSVRC2012 (Imagenet) validation set, and our test datasets are the commonly used KODAK (24 images), BSD300 validation set (100 images), and SET14 (14 images)." While it designates the ImageNet validation set for training and separate datasets for testing, it does not give the percentages or sample counts of any training/validation/test partition within these datasets, so the data partitioning cannot be reproduced exactly.
Hardware Specification: No. The paper does not give specific hardware details such as GPU/CPU models, processor types, or memory amounts used for the experiments; it only mentions "compute infrastructure" in the acknowledgments.
Software Dependencies: No. The paper does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1) needed to replicate the experiments.
Experiment Setup: No. The paper states, "Details regarding network architecture, training, and evaluation are provided in the supplement." Apart from high-level figures such as training for 0.5M minibatches, the main text does not give concrete hyperparameter values, optimizer settings, or detailed training configurations.