DreamClean: Restoring Clean Image Using Deep Diffusion Prior
Authors: Jie Xiao, Ruili Feng, Han Zhang, Zhiheng Liu, Zhantao Yang, Yurui Zhu, Xueyang Fu, Kai Zhu, Yu Liu, Zheng-Jun Zha
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | DreamClean relies on elegant theoretical support to assure its convergence to the clean image when VPS has appropriate parameters, and also enjoys superior experimental performance over various challenging tasks that could be overwhelming for previous methods when the degradation prior is unavailable. Our experiments consist of: i) verifying that DreamClean optimizes latents toward a higher-probability region (Section 3.2); ii) quantitative comparison with previous methods (Sections 3.3 and 3.4); iii) presentation of visual results across multiple degradation types to demonstrate its strong robustness and generality (Section 3.4); iv) exploiting the degradation model (Section 3.5), demonstrating that DreamClean is orthogonal to prior works and can exploit the underlying degradation model to solve challenging inverse problems (e.g., phase retrieval); v) ablation study on the different schedules of ηl and ηg. |
| Researcher Affiliation | Collaboration | Jie Xiao¹, Ruili Feng², Han Zhang³, Zhiheng Liu¹, Zhantao Yang³, Yurui Zhu¹, Xueyang Fu¹, Kai Zhu², Yu Liu², Zheng-Jun Zha¹ — ¹University of Science and Technology of China, ²Alibaba Group, ³Shanghai Jiao Tong University |
| Pseudocode | Yes | We present DDIM inversion in Algorithm A1, the Variance Preservation Sampling algorithm in Algorithm A2, and the DreamClean algorithm in Algorithm A3. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the DreamClean methodology is publicly available. |
| Open Datasets | Yes | We validate the efficacy of DreamClean using the diffusion models (Ho et al., 2020; Dhariwal & Nichol, 2021) trained on CelebA (Karras et al., 2018), LSUN bedroom (Yu et al., 2015), and ImageNet (Deng et al., 2009). |
| Dataset Splits | Yes | We use ImageNet 1K (Deng et al., 2009), CelebA 1K (Karras et al., 2018), and the validation set of LSUN bedroom (Yu et al., 2015) with image size 256 × 256 for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory specifications, or cloud computing resources used for its experiments. |
| Software Dependencies | No | The paper refers to using 'diffusion models' and 'Stable Diffusion XL' but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA version) needed for replication. |
| Experiment Setup | Yes | We set DDIM inference steps to 100, the inverse strength to 300, γ to 0.05, and M to 1. Hence, our method requires 90 NFEs when the degradation model is unknown (30 for DDIM inverse, 30 for DDIM, and 30 for VPS). For JPEG artifact correction, we simulate the real-world scenario by multiple non-aligned compressions. Specifically, we use cascaded JPEG compression with QF = (10, 20, 40) whose 8 × 8 blocks are shifted by (0, 3, 6) pixels respectively. |
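The non-aligned cascaded JPEG degradation in the setup row can be sketched as follows. This is a minimal illustration, not the paper's released code: it assumes the grid misalignment is realized by cyclically shifting the image before each compression pass and shifting back afterwards, so that each pass's 8 × 8 block grid lands at a different offset.

```python
# Sketch of cascaded, non-aligned JPEG compression with QF = (10, 20, 40)
# and block-grid offsets (0, 3, 6) pixels, per the reported setup. The use
# of np.roll to realize the offsets is an assumption for illustration.
import io

import numpy as np
from PIL import Image


def cascaded_jpeg(img: Image.Image,
                  qfs=(10, 20, 40),
                  shifts=(0, 3, 6)) -> Image.Image:
    """Apply successive JPEG compressions whose 8x8 grids are misaligned."""
    arr = np.asarray(img.convert("RGB"))
    for qf, s in zip(qfs, shifts):
        # Offset the image so this pass's 8x8 block grid is shifted by s px.
        shifted = np.roll(arr, shift=(s, s), axis=(0, 1))
        buf = io.BytesIO()
        Image.fromarray(shifted).save(buf, format="JPEG", quality=qf)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf).convert("RGB"))
        # Undo the offset so the next pass sees the original alignment frame.
        arr = np.roll(decoded, shift=(-s, -s), axis=(0, 1))
    return Image.fromarray(arr)
```

Because the grids of the three passes are mutually offset, the resulting blocking artifacts do not align, which is what makes this degradation harder than a single JPEG compression for methods that assume a known, grid-aligned degradation model.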