Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
Authors: Yinhuai Wang, Jiwen Yu, Jian Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on several IR tasks reveal that DDNM outperforms other state-of-the-art zero-shot IR methods. We also demonstrate that DDNM+ can solve complex real-world applications, e.g., old photo restoration. and (Section 4, Experiments) Our experiments consist of three parts. Firstly, we evaluate the performance of DDNM on five typical IR tasks and compare it with state-of-the-art zero-shot IR methods. Secondly, we experiment with DDNM+ on three typical IR tasks to verify its improvements over DDNM. Thirdly, we show that DDNM and DDNM+ perform well on challenging real-world applications. |
| Researcher Affiliation | Academia | Peking University Shenzhen Graduate School; Peng Cheng Laboratory |
| Pseudocode | Yes | Algorithm 1 (Sampling of DDNM) and Algorithm 2 (Sampling of DDNM+); a hedged code sketch of the core sampling step is given after the table. |
| Open Source Code | Yes | Code is available at https://github.com/wyhuai/DDNM. |
| Open Datasets | Yes | We choose ImageNet 1K and CelebA-HQ 1K datasets with image size 256×256 for validation. |
| Dataset Splits | No | The paper uses pre-trained denoising networks and validates on ImageNet 1K and CelebA-HQ 1K, but it does not specify the train/validation/test splits (percentages or counts) used in its experiments, nor does it cite predefined splits that would allow the data partitioning to be reproduced directly. |
| Hardware Specification | Yes | on a single 2080Ti GPU with batch size 1 |
| Software Dependencies | No | The paper provides 'Pytorch-like codes' in Appendix E but does not specify version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use DDIM as the base sampling strategy with η = 0.85, 100 steps, without classifier guidance, for all diffusion-based methods. and For fair comparison, we set T = 250, l = s = 20, r = 3 for DDNM+ while setting T = 1000 for DDNM so that the total sampling steps and computational cost are roughly equal. A brief illustration of how η enters the DDIM sampler follows below. |
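
As a companion to the Pseudocode row, the sketch below illustrates the core step of Algorithm 1 (Sampling of DDNM): estimate x0 from the current sample, replace its range-space component with A†y, and take a DDIM step. The interfaces (`eps_model`, `A`, `A_pinv`) and the schedule handling are our assumptions rather than the authors' implementation; DDNM+ additionally scales the correction to handle noisy y and applies a time-travel trick, both omitted here.

```python
import torch

def ddnm_step(x_t, t, t_prev, eps_model, y, A, A_pinv, alphas_cumprod, eta=0.85):
    """One reverse step of DDNM-style sampling (sketch, not the official code).

    x_t            : current noisy sample, shape (B, C, H, W)
    eps_model      : noise-prediction network eps_theta(x_t, t)  [assumed interface]
    y              : degraded observation, y = A(x)
    A, A_pinv      : degradation operator and its pseudo-inverse, as callables
    alphas_cumprod : 1-D tensor of cumulative products of (1 - beta_t)
    """
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]

    # 1. Predict noise and form the usual x0 estimate.
    eps = eps_model(x_t, t)
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()

    # 2. DDNM projection: take the range-space content from y and keep only the
    #    null-space content of the network's estimate:
    #    x0 <- A^+ y + (I - A^+ A) x0_hat
    x0_hat = A_pinv(y) + x0_hat - A_pinv(A(x0_hat))

    # 3. Standard DDIM update toward t_prev using the projected x0.
    sigma = eta * ((1 - a_prev) / (1 - a_t)).sqrt() * (1 - a_t / a_prev).sqrt()
    dir_xt = (1 - a_prev - sigma ** 2).clamp(min=0.0).sqrt() * eps
    return a_prev.sqrt() * x0_hat + dir_xt + sigma * torch.randn_like(x_t)
```

For noise-free inpainting with a binary mask `m`, both `A` and `A_pinv` reduce to elementwise multiplication by `m`, so step 2 simply pastes the known pixels of `y` over `x0_hat` and leaves the masked region to the diffusion prior.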
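
Regarding the Experiment Setup row, η = 0.85 is the standard DDIM noise-scale parameter (η = 0 gives deterministic DDIM, η = 1 recovers DDPM-like variance). The short check below shows how that value turns into a per-step σ_t; the linear beta schedule is an assumption for illustration and may differ from the schedules of the pre-trained models used in the paper.

```python
import torch

# Assumed linear beta schedule over T = 1000 training steps (illustrative only).
betas = torch.linspace(1e-4, 0.02, 1000)
abar = torch.cumprod(1.0 - betas, dim=0)  # cumulative alpha-bar

def ddim_sigma(t, t_prev, eta=0.85):
    # Standard DDIM noise scale between timesteps t and t_prev.
    a_t, a_prev = abar[t], abar[t_prev]
    return eta * ((1 - a_prev) / (1 - a_t)).sqrt() * (1 - a_t / a_prev).sqrt()

# Example: the first of 100 evenly spaced sampling steps out of T = 1000.
print(float(ddim_sigma(999, 989)))
```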