Denoising Diffusion Restoration Models
Authors: Bahjat Kawar, Michael Elad, Stefano Ermon, Jiaming Song
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization under various amounts of measurement noise. DDRM outperforms the current leading unsupervised methods on the diverse ImageNet dataset in reconstruction quality, perceptual quality, and runtime, being 5× faster than the nearest competitor. |
| Researcher Affiliation | Collaboration | Bahjat Kawar, Department of Computer Science, Technion, Haifa, Israel (bahjat.kawar@cs.technion.ac.il); Michael Elad, Department of Computer Science, Technion, Haifa, Israel (elad@cs.technion.ac.il); Stefano Ermon, Department of Computer Science, Stanford, California, USA (ermon@cs.stanford.edu); Jiaming Song, NVIDIA, Santa Clara, California, USA (jiamings@nvidia.com) |
| Pseudocode | No | The paper does not include pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/bahjat-kawar/ddrm. |
| Open Datasets | Yes | We demonstrate our algorithm's capabilities using the diffusion models from [19], which are trained on CelebA-HQ [23], LSUN bedrooms, and LSUN cats [56] (all 256×256 pixels). ... In addition, we use the models from [13], trained on the training set of ImageNet 256×256 and 512×512, and tested on the corresponding validation set. |
| Dataset Splits | Yes | We evaluate all methods on the problems of 4× super-resolution and deblurring, on one validation set image from each of the 1000 ImageNet classes, following [38]. ... In addition, we use the models from [13], trained on the training set of ImageNet 256×256 and 512×512, and tested on the corresponding validation set. (See the selection sketch after this table.) |
| Hardware Specification | No | The paper mentions support from "Amazon AWS" and "Google Cloud" but does not provide specific hardware details such as GPU models, CPU types, or memory configurations used for experiments. |
| Software Dependencies | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] In both the paper and the appendices. (This checklist answer is taken to implicitly cover software dependencies, which are commonly reported alongside training details in the appendices.) |
| Experiment Setup | Yes | In all experiments, we use η = 0.85, η_b = 1, and a uniformly-spaced timestep schedule based on the 1000-step pre-trained models (more details in Appendix E). The number of NFEs (timesteps) is reported in each experiment. (See the schedule sketch after this table.) |
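
The Dataset Splits row quotes an evaluation protocol of one validation image from each of the 1000 ImageNet classes. Below is a minimal sketch of one way such a per-class selection could be made, assuming the standard class-subfolder layout of the ImageNet validation set; the directory layout, fixed seed, and helper name are illustrative assumptions, not the paper's exact procedure (which follows [38]).

```python
import os
import random

def one_image_per_class(val_root: str, seed: int = 0) -> list:
    """Pick one image per class subfolder of an ImageNet-style validation set.

    Assumes val_root/<class_name>/<image files>; the layout, seed, and
    function name are illustrative assumptions, not the paper's protocol.
    """
    rng = random.Random(seed)
    picks = []
    for class_dir in sorted(os.listdir(val_root)):
        class_path = os.path.join(val_root, class_dir)
        if not os.path.isdir(class_path):
            continue
        images = sorted(
            f for f in os.listdir(class_path)
            if f.lower().endswith((".jpeg", ".jpg", ".png"))
        )
        if images:
            picks.append(os.path.join(class_path, rng.choice(images)))
    return picks  # one path per class, i.e. 1000 images for full ImageNet
```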
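
The Experiment Setup row fixes η = 0.85 and η_b = 1 and uses a uniformly spaced timestep schedule drawn from the 1000-step pre-trained models. The sketch below shows one simple way to build such a schedule; the helper name and the integer-skip spacing are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

# Hyperparameters reported in the paper's experiment setup.
ETA = 0.85    # η: scales the stochastic part of the sampling update
ETA_B = 1.0   # η_b: used in the measurement-informed update steps

def uniform_timesteps(num_steps: int, num_train_steps: int = 1000) -> np.ndarray:
    """Return roughly evenly spaced timestep indices in [0, num_train_steps).

    The sampling loop would traverse these from the noisiest index down to 0;
    spacing by an integer skip is an assumption, not the paper's exact rule.
    """
    skip = num_train_steps // num_steps
    steps = np.arange(0, num_train_steps, skip)
    return steps[::-1]

# Example: a 20-step (20-NFE) schedule over a 1000-step pre-trained model,
# yielding the 20 indices 950, 900, ..., 50, 0.
print(uniform_timesteps(20))
```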