Pseudoinverse-Guided Diffusion Models for Inverse Problems
Authors: Jiaming Song, Arash Vahdat, Morteza Mardani, Jan Kautz
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method, termed Pseudoinverse-Guided Diffusion Models (ΠGDM), on various inverse problems, such as super-resolution, inpainting, and JPEG restoration over ImageNet validation images, and show that it achieves similar performance when compared against state-of-the-art task-specific diffusion models (Saharia et al., 2021; Dhariwal & Nichol, 2021; Saharia et al., 2022a). |
| Researcher Affiliation | No | The reviewed PDF is anonymized: "Anonymous authors. Paper under double-blind review." |
| Pseudocode | Yes | We list the full algorithm for ΠGDM for VP-SDE in Algorithm 1. ... Listing 1: Pseudocode for computing the pseudoinverse guidance for the noiseless case. |
| Open Source Code | No | The paper refers to publicly available datasets and model checkpoints from 'openai/guided-diffusion' used in their experiments, but does not explicitly state that their own implementation code for ΠGDM is open-source or provide a link to it. |
| Open Datasets | Yes | We evaluate quantitative results on the ImageNet dataset (Russakovsky et al., 2015) |
| Dataset Splits | Yes | We report super-resolution results on the full ImageNet validation set, and to follow the earlier practice established in Saharia et al. (2022a), we report inpainting and JPEG restoration results on a subset that contains 10k images. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions a 'PyTorch-like implementation' but does not specify any software dependencies with version numbers (e.g., specific library versions or programming language versions). |
| Experiment Setup | Yes | We use 100 iterations and η = 1.0 for ΠGDM, and include additional task-specific details in App. B. For ΠGDM, we use a class-conditional model, initialize our sampler from pure Gaussian noise at the maximum noise level σT , apply 100 iterations to each image, and set η = 1.0. |
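The pseudoinverse guidance that the paper's Listing 1 describes (noiseless case) can be sketched in a few lines. The sketch below is illustrative only: it uses a toy linear denoiser `x̂ = c·x_t` in place of the paper's diffusion model and a 1-D 2× average-pooling operator as the super-resolution measurement `H`; all function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def avg_pool_matrix(n):
    """Toy 2x down-sampling (average-pooling) measurement matrix H."""
    H = np.zeros((n // 2, n))
    for i in range(n // 2):
        H[i, 2 * i] = H[i, 2 * i + 1] = 0.5
    return H

def pinv_guidance(x_t, y, H, denoise_scale=0.9):
    """Noiseless pseudoinverse guidance, g = (dx̂/dx_t)^T (H† y − H† H x̂).

    x_t : current noisy sample
    y   : observed measurement, y = H x_true
    The denoiser here is a stand-in linear map x̂ = denoise_scale * x_t,
    so its Jacobian is denoise_scale * I and the vector-Jacobian
    product reduces to a rescaling of the residual.
    """
    H_pinv = np.linalg.pinv(H)
    x_hat = denoise_scale * x_t                      # toy denoised estimate
    residual = H_pinv @ y - H_pinv @ (H @ x_hat)     # pseudoinverse residual
    return denoise_scale * residual

rng = np.random.default_rng(0)
x_true = rng.standard_normal(8)
H = avg_pool_matrix(8)
y = H @ x_true

x_t = rng.standard_normal(8)
g = pinv_guidance(x_t, y, H)

# A step along the guidance direction shrinks the measurement error
# of the denoised estimate.
x_hat_old = 0.9 * x_t
x_hat_new = 0.9 * (x_t + g)
print(np.linalg.norm(H @ x_hat_old - y) > np.linalg.norm(H @ x_hat_new - y))
```

Because `H` here has full row rank, one guidance step contracts the measurement residual `H x̂ − y` by a factor of `1 − denoise_scale²`; in the actual sampler this guidance term is added to the score at each of the 100 iterations rather than applied as a standalone update.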