Noise2Self: Blind Denoising by Self-Supervision

Authors: Joshua Batson, Loic Royer

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate this on natural image and microscopy data, where we exploit noise independence between pixels, and on single-cell gene expression data, where we exploit independence between detections of individual molecules. In Table 1, we compare all three J-invariant denoisers on a single image. As shown in Table 2, both neural nets trained with self-supervision (Noise2Self) achieve superior performance to the classic unsupervised denoisers NLM and BM3D (at default parameter values), and comparable performance to the same neural net architectures trained with clean targets (Noise2Truth) and with independently noisy targets (Noise2Noise).
Researcher Affiliation | Academia | Chan-Zuckerberg Biohub. Correspondence to: Joshua Batson <joshua.batson@czbiohub.org>, Loic Royer <loic.royer@czbiohub.org>.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Sample code is available on GitHub: https://github.com/czbiohub/noise2self
Open Datasets | Yes | We demonstrate this on natural image and microscopy data... and on single-cell gene expression data. For the datasets Hanzi and ImageNet we use a mixture of Poisson, Gaussian, and Bernoulli noise. For the CellNet microscopy dataset we simulate realistic sCMOS camera noise. In Figure 5, we demonstrate this phenomenon for an alphabet consisting of 30 16x16 handwritten digits drawn from MNIST (LeCun et al., 1998).
Dataset Splits | No | The paper mentions 'splitting the samples into train and validation sets' for principal component regression and 'cross-validation', but does not provide specific percentages or absolute counts for these splits for the main deep learning experiments.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper cites PyTorch (Paszke et al., 2017) but does not provide specific version numbers for the software dependencies used in its experiments.
Experiment Setup | Yes | In Figure 2... and selects the optimal radius r = 3. We use a random partition of 25 subsets for J, and we make the neural net J-invariant as in Eq. 3, except we replace the masked pixels with random values instead of local averages. To accelerate training, we only compute the net outputs and loss for one partition J ∈ J per minibatch. The DnCNN architecture, with 560,000 parameters, trained with self-supervision on the noisy camera image from Figure 3, with 260,000 pixels, achieves a PSNR of 31.2.
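The masking scheme quoted above (partition the pixels into 25 random subsets, replace the pixels of one subset with random values, and compute the loss only on those pixels) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name and signature are hypothetical.

```python
import numpy as np

def masked_loss_step(noisy, denoise_fn, n_partitions=25, rng=None):
    """One self-supervised step in the style described above:
    mask one random subset J of pixels, fill it with random values,
    and score the denoiser only on the masked pixels."""
    rng = np.random.default_rng() if rng is None else rng
    # Assign every pixel to one of n_partitions random subsets.
    partition = rng.integers(0, n_partitions, size=noisy.shape)
    # Pick a single subset J to mask for this minibatch.
    j = rng.integers(0, n_partitions)
    mask = partition == j
    # Replace the masked pixels with random values (not local averages).
    corrupted = noisy.copy()
    corrupted[mask] = rng.uniform(noisy.min(), noisy.max(), size=int(mask.sum()))
    # Evaluate the self-supervised loss only on the masked pixels.
    output = denoise_fn(corrupted)
    return float(np.mean((output[mask] - noisy[mask]) ** 2))
```

Because the loss is restricted to pixels the network never sees, the identity function cannot achieve zero loss, which is what makes the objective usable without clean targets.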