PUCA: Patch-Unshuffle and Channel Attention for Enhanced Self-Supervised Image Denoising
Authors: Hyemi Jang, Junsung Park, Dahuin Jung, Jaihyun Lew, Ho Bae, Sungroh Yoon
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that PUCA achieves state-of-the-art performance, outperforming existing methods in self-supervised image denoising. |
| Researcher Affiliation | Academia | Hyemi Jang¹, Junsung Park¹, Dahuin Jung¹, Jaihyun Lew², Ho Bae³, Sungroh Yoon¹,²; ¹Department of Electrical and Computer Engineering, Seoul National University; ²Interdisciplinary Program in Artificial Intelligence, Seoul National University; ³Department of Cyber Security, Ewha Womans University |
| Pseudocode | No | The paper does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present any structured, code-like steps for its method. |
| Open Source Code | Yes | Code is available at https://github.com/HyemiEsme/PUCA |
| Open Datasets | Yes | Smartphone Image Denoising Dataset (SIDD) [1] is a collection of real-world images for denoising captured by five different smartphone cameras. Specifically, the SIDD-Medium dataset consists of 320 pairs of noisy and clean images for training purposes. Darmstadt Noise Dataset (DND) [26] is a dataset used for benchmarking image denoising algorithms. |
| Dataset Splits | Yes | In addition, the SIDD validation set and benchmark set are used for validation and evaluation, respectively. Both sets consist of 1,280 noisy patches of size 256×256, and corresponding clean images are provided only for the validation set. |
| Hardware Specification | Yes | We trained the model using an NVIDIA TESLA P100 GPU |
| Software Dependencies | Yes | Implemented with PyTorch 2.0.0. |
| Experiment Setup | Yes | The model was trained with L1 loss between the input noisy image and the output, using the Adam optimizer with an initial learning rate of 1e-4. We trained the model for 20 epochs until it fully converged. More detailed information can be found in our supplementary material. (A minimal sketch of these settings follows the table.) |
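
The Experiment Setup row lists the reported optimizer, loss, and epoch count but no code. Below is a minimal, hedged PyTorch sketch of those settings only: `PlaceholderDenoiser` and the synthetic data loader are stand-ins invented for illustration, not the PUCA architecture or the SIDD data pipeline; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class PlaceholderDenoiser(nn.Module):
    """Stand-in network for illustration; PUCA's real architecture lives in the repo."""

    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.body(x)


device = "cuda" if torch.cuda.is_available() else "cpu"
model = PlaceholderDenoiser().to(device)
criterion = nn.L1Loss()                                    # L1 loss, as reported
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, initial lr 1e-4

# Synthetic stand-in for SIDD-Medium noisy patches (the real data is 256x256 crops).
loader = torch.utils.data.DataLoader(torch.randn(8, 3, 256, 256), batch_size=4)

for epoch in range(20):                                    # 20 epochs, as reported
    for noisy in loader:
        noisy = noisy.to(device)
        output = model(noisy)
        loss = criterion(output, noisy)  # loss between the noisy input and the output
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```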