Denoising MCMC for Accelerating Diffusion-Based Generative Models
Authors: Beomsu Kim, Jong Chul Ye
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare various integrators with and without DLG on CIFAR10, CelebA-HQ-256, and FFHQ-1024 image generation. ... On CIFAR10 in the limited number of score function evaluation (NFE) setting, we obtain 3.25 FID with 10 NFE and 2.49 FID with 16 NFE. On CelebA-HQ-256, we have 6.99 FID with 160 NFE, which is currently the best result with score-based models. |
| Researcher Affiliation | Academia | Beomsu Kim¹ Jong Chul Ye¹ ... ¹Kim Jaechul Graduate School of AI, KAIST, Daejeon, Korea. Correspondence to: Jong Chul Ye <jong.ye@kaist.ac.kr>. |
| Pseudocode | Yes | Appendix C. Pseudocodes |
| Open Source Code | Yes | Code: https://github.com/1202kbs/DMCMC |
| Open Datasets | Yes | Datasets. We use the CIFAR10 dataset (Krizhevsky, 2009), CelebA-HQ-256 dataset (Karras et al., 2018), and FFHQ-1024 dataset (Karras et al., 2019). |
| Dataset Splits | No | The paper mentions generating samples for evaluation (e.g., 'For CIFAR10, we generate 50k samples') and using pre-trained models, but does not provide specific train/validation/test dataset splits for reproducibility. |
| Hardware Specification | Yes | Device. We use an RTX 2080 Ti or two Quadro RTX 6000 depending on the required VRAM. |
| Software Dependencies | No | The paper mentions software such as 'pytorch-fid' and the Adam optimizer but does not give version numbers for these or other key dependencies; a hedged pytorch-fid usage sketch follows the table. |
| Experiment Setup | Yes | Appendix A. Detailed Experiment Settings... For DLG, the chain was initialized by generating samples with the deterministic integrator of Karras et al. (2022) using 37 NFE, adding Gaussian noise of variance 0.25, and running 20 iterations of Langevin-Gibbs. Table 3 lists the hyper-parameters for DLG and the corresponding FID (Figure 3). A minimal sketch of this chain initialization appears directly below the table. |
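
To make the quoted setup concrete, the following is a minimal sketch of the described chain initialization, not the authors' implementation (see the linked repository for that). `score_fn`, `x_det`, `sigma`, and `step_size` are assumed names and placeholder values; the full DLG sampler is a Langevin-Gibbs scheme that also updates the noise level, whereas this sketch runs plain Langevin updates at a fixed noise level.

```python
import torch

def init_dlg_chain(score_fn, x_det, sigma=0.5, step_size=1e-4, n_iter=20):
    """Sketch of the chain initialization quoted from Appendix A.

    Assumptions: `score_fn(x, sigma)` returns the score of the
    noise-perturbed data distribution; `x_det` holds samples produced
    by a deterministic integrator (the paper uses Karras et al. (2022)
    with 37 NFE). `step_size` is a placeholder; the paper's Table 3
    lists the actual hyper-parameters.
    """
    # Perturb the deterministic samples with Gaussian noise of
    # variance 0.25 (standard deviation 0.5), as stated in the paper.
    x = x_det + 0.5 * torch.randn_like(x_det)

    # Run 20 unadjusted Langevin iterations. The real DLG alternates
    # this position update with a Gibbs update of the noise level.
    for _ in range(n_iter):
        z = torch.randn_like(x)
        x = x + 0.5 * step_size * score_fn(x, sigma) + (step_size ** 0.5) * z
    return x
```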
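
Likewise, for the unpinned 'pytorch-fid' dependency noted above, here is a hedged sketch of how FID is typically computed with that package. The function name and signature reflect recent releases and may differ across versions (which is exactly why version pinning matters for reproducibility), and the image directories are hypothetical.

```python
import torch
# Requires the `pytorch-fid` package (e.g. `pip install pytorch-fid`);
# the CLI equivalent is: python -m pytorch_fid real_images/ generated_images/
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical directories: a reference set vs. the 50k generated samples
# the paper evaluates on CIFAR10.
fid = calculate_fid_given_paths(
    ["real_images/", "generated_images/"],
    batch_size=50,
    device=device,
    dims=2048,  # default InceptionV3 pool3 feature dimension
)
print(f"FID: {fid:.2f}")
```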