A Unified Conditional Framework for Diffusion-based Image Restoration
Authors: Yi Zhang, Xiaoyu Shi, Dasong Li, Xiaogang Wang, Jian Wang, Hongsheng Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our conditional framework on three challenging tasks: extreme low-light denoising, deblurring, and JPEG restoration, demonstrating its significant improvements in perceptual quality and the generalization to restoration tasks. |
| Researcher Affiliation | Collaboration | 1 CUHK MMLab 2 Snap Research 3 Centre for Perceptual and Interactive Intelligence 4 Shanghai AI Laboratory |
| Pseudocode | No | The paper describes its method in text and diagrams (Figure 1, Figure 2) but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | Yes | https://zhangyi-3.github.io/project/UCDIR |
| Open Datasets | Yes | For extreme low-light denoising, we use the SID Sony dataset [7]... For deblurring, we follow DvSR [50] to train and test on the GoPro dataset. For JPEG restoration, we train on ImageNet and follow DDRM [26] to test on selected 1K evaluation images [37]. |
| Dataset Splits | No | The paper mentions training parameters and using a test set, but does not explicitly describe validation dataset splits, percentages, or counts. |
| Hardware Specification | Yes | The training process takes approximately three days to complete when utilizing 8 A100 GPUs. |
| Software Dependencies | No | The paper mentions optimizers (AdamW) and activation functions (Swish) but does not provide specific version numbers for software dependencies or libraries used for implementation. |
| Experiment Setup | Yes | We used the AdamW optimizer with a learning rate of 1 × 10⁻⁴, and the EMA decay rate is 0.9999. In the training, we used the diffusion process with T = 2000 steps with the continuous noise level [10]. During the testing, the inference step is reduced to 50 with uniform interpolation. ...We train each task for 500k iterations with batch size 32. |
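
The hyperparameters quoted in the Experiment Setup row can be collected into a small training configuration. The sketch below is an illustrative reconstruction only, assuming a PyTorch implementation; the placeholder network, the `ema_update` helper, and the uniformly interpolated inference schedule are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the reported training setup (AdamW, lr 1e-4, EMA 0.9999,
# T = 2000 training steps, 50 inference steps, 500k iterations, batch size 32).
# The model below is a hypothetical stand-in for the conditional diffusion network.
import copy
import torch

config = {
    "learning_rate": 1e-4,    # AdamW learning rate (from the paper)
    "ema_decay": 0.9999,      # EMA decay rate (from the paper)
    "train_steps_T": 2000,    # diffusion steps used during training
    "inference_steps": 50,    # reduced sampling steps at test time
    "iterations": 500_000,    # training iterations per task
    "batch_size": 32,         # training batch size
}

# Placeholder network; the real model is a conditional diffusion UNet.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.SiLU(),          # Swish activation, as mentioned in the paper
    torch.nn.Conv2d(64, 3, 3, padding=1),
)
ema_model = copy.deepcopy(model)  # EMA copy of the weights

optimizer = torch.optim.AdamW(model.parameters(), lr=config["learning_rate"])

@torch.no_grad()
def ema_update(model, ema_model, decay=config["ema_decay"]):
    """Exponential moving average update of the shadow weights."""
    for p, ema_p in zip(model.parameters(), ema_model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Uniformly spaced timesteps for inference: 50 steps interpolated over the
# continuous noise-level range (a simplification of the paper's schedule).
inference_timesteps = torch.linspace(0.0, 1.0, config["inference_steps"])
```

In a training loop, `ema_update(model, ema_model)` would be called after each `optimizer.step()`, and the EMA weights would be used for evaluation; this mirrors common diffusion-model practice but is not detailed in the paper itself.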