Diffusion Priors for Variational Likelihood Estimation and Image Denoising
Authors: Jun Cheng, Shan Tan
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and analyses on diverse real-world datasets demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Jun Cheng, Shan Tan School of Artificial Intelligence and Automation, Huazhong University of Science and Technology jcheng24@hust.edu.cn, shantan@hust.edu.cn |
| Pseudocode | Yes | Algorithm 1 Diffusion priors-based variational image denoising |
| Open Source Code | Yes | Code is available at https://github.com/HUST-Tan/DiffusionVI. |
| Open Datasets | Yes | We consider several real-world denoising datasets to evaluate our method, including SIDD [1], PolyU [47], CC [30], and FMDD [51]. |
| Dataset Splits | No | The paper mentions 'SIDD validation' and dataset sizes, but does not specify explicit training/validation/test splits (e.g., percentages or sample counts) for all datasets used. |
| Hardware Specification | Yes | All experiments are conducted on Nvidia 2080Ti GPU. |
| Software Dependencies | No | The paper mentions using a pre-trained diffusion model but does not specify versions for core software libraries or dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | The total diffusion steps are 1000 by default, i.e., t ∈ [1, ..., 1000]. We choose α = 1 and Gaussian kernel size l = 9. The hyperparameters β and s for different datasets are summarized in Table 1. Different α/β represent the rough estimation of the prior precision for noises in different datasets, and Gaussian kernel scale s controls the range of local spatial correlation. The temperature γ is set to 1/5 for all datasets and will be ablated in the sequel. |
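The reported experiment setup can be collected into a small configuration sketch. This is purely illustrative: the variable names (`alpha`, `beta`, `gamma`, `kernel_size`) are not taken from the authors' released code, and the dataset-specific β and s values are left as placeholders because they live in the paper's Table 1.

```python
# Hypothetical configuration mirroring the reported hyperparameters;
# names are illustrative, not the authors' actual code.
T = 1000          # total diffusion steps, t in [1, ..., 1000]
alpha = 1.0       # alpha in the alpha/beta prior-precision estimate
kernel_size = 9   # Gaussian kernel size l
gamma = 1.0 / 5   # temperature, shared across all datasets

# beta and the Gaussian kernel scale s are dataset-specific (paper's
# Table 1); placeholders are left as None rather than guessed.
per_dataset = {
    "SIDD":  {"beta": None, "s": None},
    "PolyU": {"beta": None, "s": None},
    "CC":    {"beta": None, "s": None},
    "FMDD":  {"beta": None, "s": None},
}

def prior_precision(alpha: float, beta: float) -> float:
    """Rough estimate of the noise prior precision as alpha/beta."""
    return alpha / beta
```

With α fixed at 1, only β varies across datasets, so the per-dataset noise precision estimate is simply 1/β.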