Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders
Authors: Mangal Prakash, Alexander Krull, Florian Jug
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We quantitatively evaluated the performance of DIVNOISING on 13 publicly available datasets (see Appendices A.1 and A.2 for data details), 9 of which are subject to high levels of intrinsic (real world) noise. To 4 others we synthetically added noise, hence giving us full knowledge about the nature of the added noise. In Table 1, we report denoising performance of all experiments we conducted in terms of peak signal-to-noise ratio (PSNR) with respect to available ground truth images. |
| Researcher Affiliation | Academia | Mangal Prakash: Center for Systems Biology Dresden, Max-Planck Institute (CBG), Dresden, Germany, prakash@mpi-cbg.de. Alexander Krull: School of Computer Science, University of Birmingham, Birmingham, UK, a.f.f.krull@bham.ac.uk. Florian Jug: Center for Systems Biology Dresden, Max-Planck Institute (CBG), Dresden, Germany; Fondazione Human Technopole, Milano, Italy, jug@mpi-cbg.de, florian.jug@fht.org |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide a direct link to the source code for the DIVNOISING methodology, nor does it explicitly state that the code is being released. |
| Open Datasets | Yes | We use public microscopy datasets which show realistic levels of noise, introduced by the respective optical imaging setups. The FU-PN2V Convallaria (Krull et al., 2020; Prakash et al., 2020) data... The FU-PN2V Mouse nuclei (Prakash et al., 2020) data... The FU-PN2V Mouse actin (Prakash et al., 2020) data... Finally, we use all 3 channels of 2 noise levels (avg1 and avg16) of the W2S (Zhou et al., 2020) data. We use the well known MNIST (LeCun et al., 1998) as well as the KMNIST (Clanuwat et al., 2018) dataset... The DenoiSeg Mouse (Buchholz et al., 2020) data... The DenoiSeg Flywing (Buchholz et al., 2020) data... Lastly, we randomly select 500 images of size 384 × 286 from the BioID Face recognition database (noa)... |
| Dataset Splits | Yes | For all experiments on intrinsically noisy microscopy data, validation and test set splits follow the ones described in the respective publication. For all datasets other than MNIST and KMNIST, we extract training patches of size 128 × 128, and separate 15% of all patches for validation. |
| Hardware Specification | Yes | Our depth-2 networks trained for all experiments require about 1.8 GB of GPU memory, and our depth-3 networks roughly 5 GB, on an NVIDIA TITAN Xp GPU. |
| Software Dependencies | No | The paper mentions software like ADAM optimizer and U-NET architecture, but does not provide specific version numbers for any software dependencies (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | For the baselines: training is performed using the ADAM (Kingma & Ba, 2015) optimizer for 200 epochs with 10 steps per epoch, with a batch size of 4 and a virtual batch size of 20 for N2V and CARE, and a batch size of 1 and a virtual batch size of 20 for PN2V, an initial learning rate of 0.001, and the same basic learning rate scheduler as in (Krull et al., 2020). For DIVNOISING: all networks are trained with a batch size of 32 and an initial learning rate of 0.001. The learning rate is multiplied by 0.5 if the validation loss does not decrease for 30 epochs. |
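The Dataset Splits row describes extracting 128 × 128 training patches and holding out 15% of them for validation. The following is a minimal sketch of that procedure; `extract_patches` is a hypothetical helper (not from the paper), and the authors' actual patching and shuffling may differ.

```python
import numpy as np

def extract_patches(images, patch_size=128, val_fraction=0.15, seed=0):
    """Cut non-overlapping square patches from each image and hold out
    a validation fraction, as described in the Dataset Splits row.

    Hypothetical illustration; the paper's exact extraction may differ.
    """
    patches = []
    for img in images:
        h, w = img.shape[:2]
        for y in range(0, h - patch_size + 1, patch_size):
            for x in range(0, w - patch_size + 1, patch_size):
                patches.append(img[y:y + patch_size, x:x + patch_size])
    patches = np.stack(patches)

    # Shuffle, then separate val_fraction of all patches for validation.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    n_val = int(len(patches) * val_fraction)
    return patches[idx[n_val:]], patches[idx[:n_val]]
```

For example, two 256 × 256 images yield eight 128 × 128 patches, of which one (15%, rounded down) lands in the validation set.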
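The learning-rate schedule quoted in the Experiment Setup row (initial rate 0.001, halved when the validation loss does not decrease for 30 epochs) maps directly onto a plateau scheduler. A sketch in PyTorch, assuming a PyTorch implementation and using a toy model in place of the DIVNOISING VAE:

```python
import torch

# Toy model standing in for the DivNoising VAE (hypothetical).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)

# ADAM with the reported initial learning rate of 0.001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Multiply the learning rate by 0.5 when the validation loss has not
# decreased for 30 epochs, matching the quoted schedule.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=30)

# Inside the training loop, after computing val_loss each epoch:
#     scheduler.step(val_loss)
```

`ReduceLROnPlateau` counts epochs without improvement and resets the counter after each reduction, which is exactly the "does not decrease for 30 epochs" behavior described.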