BayesDiff: Estimating Pixel-wise Uncertainty in Diffusion via Bayesian Inference
Authors: Siqi Kou, Lei Gan, Dequan Wang, Chongxuan Li, Zhijie Deng
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the efficacy of BayesDiff and its promise for practical applications. Our code is available at https://github.com/karrykkk/BayesDiff. 1 INTRODUCTION The ability of diffusion models to gradually denoise noise vectors into natural images has paved the way for numerous applications, including image synthesis (Dhariwal & Nichol, 2021; Rombach et al., 2022), image inpainting (Lugmayr et al., 2022), text-to-image generation (Saharia et al., 2022; Gu et al., 2022; Zhang et al., 2023), etc. |
| Researcher Affiliation | Academia | Siqi Kou1, Lei Gan4, Dequan Wang1,5, Chongxuan Li2,3, and Zhijie Deng1 1Qing Yuan Research Institute, SEIEE, Shanghai Jiao Tong University 2Gaoling School of Artificial Intelligence, Renmin University of China 3Beijing Key Laboratory of Big Data Management and Analysis Methods 4School of Computer Science, Fudan University 5Shanghai Artificial Intelligence Laboratory |
| Pseudocode | Yes | Algorithm 1 Pixel-wise uncertainty estimation via Bayesian inference. (BayesDiff) ... Algorithm 2 A faster variant of BayesDiff. (BayesDiff-Skip) |
| Open Source Code | Yes | Our code is available at https://github.com/karrykkk/BayesDiff. |
| Open Datasets | Yes | We first conduct experiments on the U-ViT (Bao et al., 2023) model trained on ImageNet (Deng et al., 2009) and Stable Diffusion... We use BayesDiff-Skip to generate 100,000 images on CelebA (Liu et al., 2015) based on the DDPM (Ho et al., 2020) model |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages or exact counts) for the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) required for replication. |
| Experiment Setup | Yes | Unless specified otherwise, we set the Monte Carlo sample size S to 10 and adopt BayesDiff-Skip with a skipping interval of 4, which makes our sampling and uncertainty quantification procedure consume no more than 2× the time of the vanilla sampling method. The sampling algorithm follows the second-order DPM-Solver with 50 function evaluations (NFE). |
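The Monte Carlo estimation described in the setup row (S stochastic draws per step, reduced pixel-wise variance) can be sketched as follows. This is a minimal illustration only, not the paper's implementation: `sample_fn` is a hypothetical stand-in for one stochastic draw of the denoised prediction (in BayesDiff these draws come from a last-layer Laplace-approximated posterior over the score network).

```python
import numpy as np

def pixelwise_uncertainty(sample_fn, x_t, t, S=10):
    """Monte Carlo estimate of per-pixel mean and variance.

    sample_fn(x_t, t) is assumed to return one stochastic draw of
    the model's prediction for the current noisy image x_t at step t.
    S is the Monte Carlo sample size (the paper's default is 10).
    """
    # Stack S independent draws along a new leading axis.
    draws = np.stack([sample_fn(x_t, t) for _ in range(S)])
    # Per-pixel mean is the point estimate; per-pixel variance is
    # the uncertainty map propagated through the sampling trajectory.
    return draws.mean(axis=0), draws.var(axis=0)
```

The BayesDiff-Skip variant mentioned above amortizes this cost by performing the S-draw estimate only every k-th solver step (the paper uses k = 4) and reusing the last uncertainty estimate in between, which bounds the overhead at roughly a constant factor over vanilla sampling.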