Taming Generative Diffusion Prior for Universal Blind Image Restoration

Authors: Siwei Tu, Weidong Yang, Ben Fei

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, our BIR-D has demonstrated superior practicality and versatility to off-the-shelf unsupervised methods across various tasks on both real-world and synthetic datasets, qualitatively and quantitatively. In this section, we systematically compare BIR-D with other blind image restoration methods on real-world and synthetic datasets.
Researcher Affiliation | Academia | Fudan University; Chinese University of Hong Kong
Pseudocode | Yes | Algorithm 1: unconditional diffusion model with the guidance of a degraded image y, given a diffusion-model noise prediction function ϵθ(x_t, t).
Open Source Code | Yes | The code is available at https://github.com/Tusiwei/BIR-D
Open Datasets | Yes | We evaluate the blind image restoration capability of BIR-D on two real-world datasets, namely the LFW dataset [14] and the WIDER dataset [15]. We conducted experiments on linear inverse problems on ImageNet-1K to compare BIR-D with off-the-shelf methods.
Dataset Splits | No | Explanation: The paper discusses the datasets used for testing and mentions "training and test details" in the NeurIPS checklist, but it does not provide explicit dataset splits (e.g., specific percentages or counts for training, validation, and test sets, or a defined cross-validation strategy).
Hardware Specification | No | Explanation: Although the NeurIPS checklist states that hardware information is provided in the appendix and code ("Justification: The detailed information about the experiment in the paper is introduced in the appendix and code."), neither the main paper nor its appendices lists specific hardware such as GPU models (e.g., NVIDIA A100), CPU models, or memory capacity used for the experiments.
Software Dependencies | No | Explanation: The paper does not state version numbers for key software components or libraries (e.g., Python 3.8, PyTorch 1.9) required to reproduce the experiments.
Experiment Setup | Yes | The experimental section of the paper provides information such as the datasets used and the convolution kernel size; complete details are given in the appendix and code (from NeurIPS checklist Q6). For blind image restoration tasks, experiments showed that a 5×5 convolution kernel performs best. For linear inverse tasks (Table 8), the optimal convolution kernel size was 9×9.
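The Algorithm 1 row above (an unconditional diffusion model steered by a degraded image y via the noise predictor ϵθ(x_t, t)) can be illustrated with a toy sketch. This is not BIR-D's implementation: the function names, the linear noise schedule, the tiny vector "image", and the assumption of a known linear degradation operator are all illustrative choices made here. At each reverse step it forms a clean-image estimate from the predicted noise and nudges the DDPM mean along the gradient of a data-consistency term with y.

```python
import numpy as np

def guided_sampling(eps_theta, y, degrade, T=50, guidance_scale=1.0, dim=16, seed=0):
    """Toy guided reverse diffusion (illustrative sketch, not BIR-D's code).

    eps_theta(x, t): noise prediction function of the unconditional model.
    y: degraded observation; degrade: assumed-known linear degradation matrix.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)        # simple linear schedule (assumption)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)              # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = eps_theta(x, t)
        # Estimate of the clean image x0 from the current noisy sample
        x0_hat = (x - np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
        # Standard DDPM posterior mean for the unconditional model
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        # Guidance: gradient of ||degrade @ x0_hat - y||^2 / 2 w.r.t. x0_hat,
        # pulling the sample toward consistency with the degraded image y
        grad = degrade.T @ (degrade @ x0_hat - y)
        mean = mean - guidance_scale * grad
        noise = rng.standard_normal(dim) if t > 0 else np.zeros(dim)
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Usage with a dummy noise predictor and an identity "degradation"
A = np.eye(16)
y = np.zeros(16)
restored = guided_sampling(lambda x, t: np.zeros_like(x), y, A)
```

In a blind setting such as BIR-D's, the degradation operator itself would also be unknown and estimated during sampling; here it is fixed purely to keep the sketch short.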