sRGB Real Noise Modeling via Noise-Aware Sampling with Normalizing Flows
Authors: Dongjin Kim, Donggoo Jung, Sungyong Baik, Tae Hyun Kim
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experimental results, our model demonstrates exceptional noise quality and leads in denoising performance on benchmark datasets. To evaluate the capabilities of our noise generation model, we conduct two main experiments. First, we assess the quality of the generated noise quantitatively and qualitatively and investigate noise correlation through visualization (Sec 5.2, Sec 5.3, and Sec 5.4). Second, we examine the effectiveness of the generated noise in real-world denoising tasks by training a denoising network using the generated noisy images (Sec 5.5). Finally, we conduct ablation studies to show the superiority of our proposed NAFlow, NAS, and multi-scale noise embedding (Sec 5.6). |
| Researcher Affiliation | Academia | Dongjin Kim^1, Donggoo Jung^2, Sungyong Baik^3, Tae Hyun Kim^1; Dept. of Computer Science^1, Dept. of Artificial Intelligence^2, Dept. of Data Science^3, Hanyang University. {dongjinkim, dgjung, dsybaik, taehyunkim}@hanyang.ac.kr |
| Pseudocode | Yes | Algorithm 1 Noise-Aware Sampling (NAS). Input: clean condition x, noisy input y. 1: z = f_θ(y; x). 2: Sample z̃ ~ N(μ̃, Σ̃), where μ̃ = Σ_c [N(z; μ_c, Σ_c) / Σ_{c′} N(z; μ_{c′}, Σ_{c′})] μ_c and Σ̃ = Σ_c [N(z; μ_c, Σ_c) / Σ_{c′} N(z; μ_{c′}, Σ_{c′})] Σ_c. 3: return ỹ = f_θ⁻¹(z̃; x) |
| Open Source Code | No | The paper provides links to official GitHub repositories for third-party baseline models (C2N and NeCA) but does not provide a link or explicit statement about the availability of their own source code for the NAFlow framework. |
| Open Datasets | Yes | To train NAFlow, we use the SIDD (Abdelhamed et al., 2018) dataset, which has 34 different camera configurations (i.e., C = 34). Specifically, we use the SIDD-Medium split, which comprises 320 noisy-clean image pairs captured with five different smartphone cameras: Google Pixel (GP), iPhone 7 (IP), Samsung Galaxy S6 Edge (S6), Motorola Nexus 6 (N6), and LG G4 (G4). |
| Dataset Splits | Yes | To evaluate the noise generation, we use SIDD-validation. The training of the denoiser in Sec. 5.5 also uses the same dataset, but instead of real noisy data, it uses noisy data generated by each noise modeling method. Following the usual settings in (Kousha et al., 2022; Fu et al., 2023), to ensure consistency between the training and validation sets, we ensure that both sets contain the same ISO levels. |
| Hardware Specification | No | The paper states 'All the networks are optimized using Adam optimizer' and 'Training is conducted with the identical experimental configuration as described in Fu et al. (2023)' but does not provide specific details on the hardware (e.g., GPU models, CPU types, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer (Kingma & Ba, 2014)' and 'DnCNN (Zhang et al., 2017)', but it does not specify software versions for programming languages, libraries, or frameworks (e.g., Python version, PyTorch/TensorFlow version, CUDA version). |
| Experiment Setup | Yes | For training NAFlow, we minimize the L_NLL loss in Eq. 6 with an initial learning rate of 1e-4, which is reduced by half at 50k, 75k, and 90k iterations during 100k total iterations. We use randomly cropped patches (160×160) and a mini-batch size of 8 for training. For the denoising network, we use DnCNN (Zhang et al., 2017). Training is conducted with the identical experimental configuration as described in Fu et al. (2023): 300 epochs, a learning rate of 10^-3, and a batch size of 8. |
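The Noise-Aware Sampling step quoted in the Pseudocode row (Algorithm 1) can be sketched as below. Since the paper releases no code, everything here is a hypothetical illustration: a pretrained flow f_θ is abstracted away (only its latent z is taken as input), per-camera Gaussians are assumed diagonal, and the names `noise_aware_sampling`, `mus`, and `variances` are our own. The sketch covers step 2 of the algorithm: compute responsibility weights of z under each camera's Gaussian, form the weighted mean μ̃ and covariance Σ̃, and resample z̃ from N(μ̃, Σ̃); the inverse flow f_θ⁻¹(z̃; x) of step 3 is omitted.

```python
import numpy as np

def gaussian_logpdf_diag(z, mu, var):
    """Log density of N(z; mu, diag(var)) for a flattened latent z."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (z - mu) ** 2 / var)

def noise_aware_sampling(z, mus, variances, rng):
    """Step 2 of Algorithm 1 (sketch): resample a latent z_tilde from the
    responsibility-weighted mixture of per-camera diagonal Gaussians."""
    # Responsibilities w_c = N(z; mu_c, Sigma_c) / sum_c' N(z; mu_c', Sigma_c')
    log_w = np.array([gaussian_logpdf_diag(z, m, v)
                      for m, v in zip(mus, variances)])
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    w /= w.sum()
    # Aggregated moments: mu_tilde = sum_c w_c mu_c, Sigma_tilde = sum_c w_c Sigma_c
    mu_tilde = np.einsum("c,cd->d", w, mus)
    var_tilde = np.einsum("c,cd->d", w, variances)
    # Sample z_tilde ~ N(mu_tilde, Sigma_tilde)
    return mu_tilde + np.sqrt(var_tilde) * rng.standard_normal(z.shape)

# Toy usage with made-up mixture parameters (C = 4 cameras, 16-dim latent)
rng = np.random.default_rng(0)
C, D = 4, 16
mus = rng.standard_normal((C, D))
variances = rng.uniform(0.5, 1.5, size=(C, D))
z = rng.standard_normal(D)        # stands in for f_theta(y; x)
z_tilde = noise_aware_sampling(z, mus, variances, rng)
print(z_tilde.shape)              # (16,)
```

In the paper C = 34 (one Gaussian per SIDD camera configuration), and the latent is the flow's image-shaped output rather than a toy vector; the weighting logic is unchanged.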