Training deep learning based denoisers without ground truth data
Authors: Shakarim Soltanayev, Se Young Chun
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, denoising simulation results are presented with the MNIST dataset using a simple stacked denoising autoencoder (SDA), and a large-scale natural image dataset using a deep convolutional neural network (CNN) image denoiser (DnCNN). |
| Researcher Affiliation | Academia | Shakarim Soltanayev Se Young Chun Department of Electrical Engineering Ulsan National Institute of Science and Technology (UNIST), Republic of Korea {shakarim,sychun}@unist.ac.kr |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/Shakarim94/Net-SURE. |
| Open Datasets | Yes | We performed denoising simulations with the MNIST dataset. The network was trained with 400 images with matrix sizes of 180×180 pixels. Two test sets were used to evaluate performance: one set consisted of 12 widely used images (Set12) [17], and the other was the BSD68 dataset. |
| Dataset Splits | Yes | SDA was trained to output a denoised image using a set of 55,000 training and 5,000 validation images. |
| Hardware Specification | Yes | With the use of an NVidia Titan X GPU, the training process took approximately 7 hours for DnCNN-MSE-GT and approximately 11 hours for DnCNN-SURE. |
| Software Dependencies | No | The paper mentions 'TensorFlow [34]' as a deep learning development framework but does not specify a version number. |
| Experiment Setup | Yes | For all cases, SDA was trained with the Adam optimization algorithm [33] with the learning rate of 0.001 for 100 epochs. The batch size was set to 200 (bigger batch sizes did not improve the performance). The ϵ value in (6) was set to 0.0001. |
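The ϵ value quoted in the setup row is the perturbation step used in the paper's Monte-Carlo approximation of the divergence term in the SURE loss. The sketch below illustrates that loss in plain NumPy; the function name `mc_sure_loss` and its signature are assumptions for illustration, not the authors' code (their TensorFlow implementation is in the linked repository).

```python
import numpy as np

def mc_sure_loss(denoiser, y, sigma, eps=1e-4, rng=None):
    """Hypothetical sketch of the Monte-Carlo SURE loss.

    SURE(y) = ||y - f(y)||^2 / n - sigma^2 + (2 sigma^2 / n) * div,
    where the divergence of the denoiser f is estimated with a single
    random probe vector n_tilde:
        div ~= n_tilde^T (f(y + eps * n_tilde) - f(y)) / eps.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = y.size
    fy = denoiser(y)
    # Random probe vector for the divergence estimate (fixed per call).
    n_tilde = rng.standard_normal(y.shape)
    div = n_tilde.ravel() @ (denoiser(y + eps * n_tilde) - fy).ravel() / eps
    return np.sum((y - fy) ** 2) / n - sigma ** 2 + (2.0 * sigma ** 2 / n) * div
```

As a sanity check, the identity "denoiser" f(y) = y has zero residual and divergence approximately n, so the loss comes out close to sigma², matching the analytic SURE value for the identity map.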