SS-BSN: Attentive Blind-Spot Network for Self-Supervised Denoising with Nonlocal Self-Similarity

Authors: Young-Joo Han, Ha-Jin Yu

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct the experiments on real-world image denoising tasks. The proposed method quantitatively and qualitatively outperforms state-of-the-art methods in self-supervised denoising on the Smartphone Image Denoising Dataset (SIDD) and Darmstadt Noise Dataset (DND) benchmark datasets.
Researcher Affiliation | Collaboration | Young-Joo Han (1,2) and Ha-Jin Yu (1); (1) School of Computer Science, University of Seoul; (2) Advanced Technology R&D Center, Vieworks
Pseudocode | No | The paper describes methods and architectures using textual descriptions and diagrams (e.g., Figures 2, 3, and 5), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/YoungJooHan/SS-BSN
Open Datasets | Yes | To evaluate the proposed method, we use the Smartphone Image Denoising Dataset (SIDD) [Abdelhamed et al., 2018] and the Darmstadt Noise Dataset (DND) [Plotz and Roth, 2017]. The ground-truth images of the SIDD benchmark and DND dataset are not provided, but peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) results for the denoised outputs can be obtained through the online submission systems on the SIDD benchmark website (https://www.eecs.yorku.ca/~kamel/sidd/benchmark.php) and the DND benchmark website (https://noise.visinf.tu-darmstadt.de/benchmark/).
Dataset Splits | Yes | The SIDD medium split contains 320 noisy-clean image pairs... We also adopted the SIDD validation and benchmark datasets for validation and testing; each contains 1,280 patches of size 256×256. The DND dataset does not provide training and validation images; therefore, we used the DND dataset for training and performance evaluation.
Hardware Specification | No | The paper does not mention any specific hardware components (e.g., GPU model, CPU type, memory size) used for running the experiments.
Software Dependencies | No | The paper mentions using the L1 loss and the Adam [Kingma and Ba, 2015] optimizer, but does not specify versions of software libraries such as PyTorch or TensorFlow, nor Python or CUDA versions.
Experiment Setup | Yes | To optimize SS-BSN, we randomly extract patches of size 120×120 from noisy images and augment all training images by randomly flipping and rotating them by 90°. In addition, we used the L1 loss and the Adam [Kingma and Ba, 2015] optimizer with an initial learning rate of 10^-4. At the 16th epoch, the learning rate is multiplied by 0.1, and the model is trained over 20 epochs. We set γ to 2 for the SS-Attention module and m to 3 for the SS-BSN architecture.
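The quoted setup (120×120 random patches, flip/rotate augmentation, initial learning rate 10^-4 decayed by 0.1 at epoch 16) can be sketched in code. The following is a minimal illustrative sketch only; the function names and NumPy implementation are assumptions of mine and are not taken from the released SS-BSN code.

```python
import numpy as np

def random_patch(img, size=120, rng=None):
    """Extract a random size x size patch from an H x W x C noisy image."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    return img[y:y + size, x:x + size]

def augment(patch, rng=None):
    """Random flip plus rotation by a random multiple of 90 degrees,
    as described in the paper's augmentation step."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=1)
    return np.rot90(patch, k=int(rng.integers(0, 4)), axes=(0, 1)).copy()

def learning_rate(epoch, base_lr=1e-4, milestone=16, gamma=0.1):
    """Step schedule: lr starts at 1e-4 and is multiplied by 0.1
    at the 16th of 20 training epochs."""
    return base_lr * gamma if epoch >= milestone else base_lr
```

In a full training loop these helpers would feed the blind-spot network, with the L1 loss computed between the network output and the noisy input itself (the blind spot prevents the identity mapping).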