Diffusion Posterior Sampling for General Noisy Inverse Problems

Authors: Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, Jong Chul Ye

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method demonstrates that diffusion models can incorporate various measurement noise statistics such as Gaussian and Poisson, and also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring. Code is available at https://github.com/DPS2022/diffusion-posterior-sampling. ... With extensive experiments including various inverse problems (inpainting, super-resolution, (Gaussian/motion/non-uniform) deblurring, Fourier phase retrieval), we show that our method serves as a general framework for solving general noisy inverse problems with superior quality (representative results shown in Fig. 1).
Researcher Affiliation | Collaboration | Hyungjin Chung¹·², Jeongsol Kim¹, Michael T. McCann², Marc L. Klasky² & Jong Chul Ye¹ (¹KAIST, ²Los Alamos National Laboratory); {hj.chung, jeongsol, jong.ye}@kaist.ac.kr, {mccann, mklasky}@lanl.gov
Pseudocode | Yes | Algorithm 1: DPS (Gaussian) ... Algorithm 2: DPS (Poisson)
Open Source Code | Yes | Code is available at https://github.com/DPS2022/diffusion-posterior-sampling. ... Code availability. Code is available at https://github.com/DPS2022/diffusion-posterior-sampling.
Open Datasets | Yes | We test our experiment on two datasets that have diverging characteristics: FFHQ 256×256 (Karras et al., 2019) and ImageNet 256×256 (Deng et al., 2009), on 1k validation images each. ... The diffusion model for FFHQ was trained from scratch using 49k training data (to exclude the 1k validation set) for 1M steps.
Dataset Splits | Yes | We test our experiment on two datasets that have diverging characteristics: FFHQ 256×256 (Karras et al., 2019) and ImageNet 256×256 (Deng et al., 2009), on 1k validation images each. ... The diffusion model for FFHQ was trained from scratch using 49k training data (to exclude the 1k validation set) for 1M steps.
Hardware Specification | Yes | All experiments were performed on a single RTX 2080Ti GPU.
Software Dependencies | No | The paper mentions using Python, PyTorch, CUDA, and the scico library, but does not provide specific version numbers for these software components.
Experiment Setup | Yes | Forward measurement operators are specified as follows: (i) for box-type inpainting, we mask out a 128×128 box region following Chung et al. (2022a), and for random-type we mask out 92% of the total pixels (all RGB channels); (ii) for super-resolution, bicubic downsampling is performed; (iii) the Gaussian blur kernel has size 61×61 with a standard deviation of 3.0, and motion blur is randomly generated with the code⁶, with size 61×61 and intensity value 0.5. ... All Gaussian noise is added in the measurement domain with σ = 0.05. The Poisson noise level is set to λ = 1.0. ... In the discrete implementation, we instead use ζ_i to express the step size. From the experiments, we observe that taking ζ_i = ζ / ||y − A(x̂_0(x_i))||, with ζ set to a constant, yields highly stable results. See Appendix D for details on the choice of step size. ... Section D.1 Implementation Details: Step size.
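The step-size rule quoted above, ζ_i = ζ / ||y − A(x̂_0(x_i))||, can be illustrated with a minimal measurement-consistency update. The following is a sketch under simplifying assumptions, not the paper's implementation: the linear operator A and all dimensions are hypothetical stand-ins for the real measurement operators, and the posterior-mean estimate x̂_0(x_i) is taken to be x_i itself, whereas the actual DPS sampler obtains it via Tweedie's formula from a trained score network and interleaves this step with DDPM denoising.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward operator A, standing in for the paper's
# measurement operators (inpainting, blurring, ...); sizes are arbitrary.
A = rng.standard_normal((20, 50)) / np.sqrt(50)
x_true = rng.standard_normal(50)
sigma = 0.05                                  # measurement noise level from the paper
y = A @ x_true + sigma * rng.standard_normal(20)

def dps_gaussian_step(x, y, A, zeta=0.3):
    """One measurement-consistency step of DPS with a Gaussian likelihood.

    For this sketch the posterior-mean estimate x0_hat(x_i) is simply x_i;
    a real sampler would compute it from a trained score network. The
    gradient of ||y - A x0_hat||^2 w.r.t. x is then -2 A^T (y - A x0_hat).
    """
    x0_hat = x                                # stand-in for E[x_0 | x_i]
    residual = y - A @ x0_hat
    grad = -2.0 * A.T @ residual              # gradient of the squared residual
    zeta_i = zeta / np.linalg.norm(residual)  # zeta_i = zeta / ||y - A(x0_hat)||
    return x - zeta_i * grad

x = rng.standard_normal(50)                   # would be the current DDPM iterate
res_before = np.linalg.norm(y - A @ x)
for _ in range(200):
    x = dps_gaussian_step(x, y, A)
res_after = np.linalg.norm(y - A @ x)         # data residual shrinks substantially
```

Normalizing by the current residual norm makes the effective update roughly scale-free, which is consistent with the paper's observation that a single constant ζ yields stable results across problems and noise levels.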