Multi-Step Denoising Scheduled Sampling: Towards Alleviating Exposure Bias for Diffusion Models

Authors: Zhiyao Ren, Yibing Zhan, Liang Ding, Gaoang Wang, Chaoyue Wang, Zhongyi Fan, Dacheng Tao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results, compared with previous works, demonstrate that our approach is more effective in mitigating exposure bias in DDPM, DDIM, and DPM-solver. In particular, MDSS achieves an FID score of 3.86 in 100 sample steps of DDIM on the CIFAR-10 dataset, whereas the second best obtains 4.78. Extensive experiments were conducted to compare the performance of current works for solving exposure bias in DDPMs. The experimental results demonstrate that our MDSS performs the best.
Researcher Affiliation | Collaboration | Zhiyao Ren (1), Yibing Zhan (2), Liang Ding (2), Gaoang Wang (3), Chaoyue Wang (1), Zhongyi Fan (2), Dacheng Tao (1). (1) The University of Sydney, Australia; (2) JD Explore Academy, China; (3) Zhejiang University, China.
Pseudocode | Yes | Algorithm 1: DDPMs Standard Training Process; Algorithm 2: DDPM Standard Inference Process; Algorithm 3: Multi-step Denoising Scheduled Sampling. (A minimal sketch of Algorithms 1 and 3 follows this table.)
Open Source Code | No | The code will be available on GitHub.
Open Datasets | Yes | We evaluate our method across unconditional image generation tasks on three datasets: CIFAR-10 (Krizhevsky, Hinton et al. 2009), ImageNet 64×64 (Deng et al. 2009), and LSUN tower 64×64 (Yu et al. 2015) datasets.
Dataset Splits | No | The paper does not provide explicit training/validation/test splits (e.g., percentages or sample counts). It uses well-known datasets and relies on their standard divisions without stating them.
Hardware Specification | No | The paper does not report the hardware used for the experiments (GPU models, CPU types, or memory specifications). It refers to 'computational resources' only in general terms, with no concrete hardware listed.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | More details regarding training hyperparameters, network architecture, FID evaluation settings, CLIP-FID (Kynkäänniemi et al. 2022; Rangwani et al. 2023) results, and qualitative comparisons can be found in the Appendix. (A hedged FID evaluation sketch follows this table.)
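
The paper's Algorithms 1 and 2 are the standard DDPM training and inference loops, and Algorithm 3 (MDSS) modifies training so the model sees inputs closer to what the sampler produces at inference, which is the source of exposure bias. As a rough illustration only, here is a minimal PyTorch sketch of the standard training step and a single-step scheduled-sampling variant; it is not the authors' exact multi-step MDSS procedure, and the model signature `model(x_t, t)`, the `alphas_cumprod` schedule tensor, and the mixing probability `p` are all assumptions.

```python
import torch
import torch.nn.functional as F

def q_sample(x0, t, alphas_cumprod, noise):
    """Forward diffusion: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def ddpm_training_step(model, x0, alphas_cumprod, T=1000):
    """Standard DDPM training step (Algorithm 1): regress the added noise."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, alphas_cumprod, noise)
    return F.mse_loss(model(x_t, t), noise)

def scheduled_sampling_step(model, x0, alphas_cumprod, T=1000, p=0.5):
    """Hypothetical single-step scheduled-sampling variant: with probability p,
    rebuild x_t from the model's own clean-image estimate, so the training
    input distribution better matches inference. A sketch of the general
    scheduled-sampling idea, not the paper's multi-step MDSS algorithm."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, alphas_cumprod, noise)
    if torch.rand(()) < p:
        with torch.no_grad():
            a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
            # One-step estimate of x0 recovered from the predicted noise.
            x0_hat = (x_t - (1.0 - a_bar).sqrt() * model(x_t, t)) / a_bar.sqrt()
        # Re-noise the *predicted* clean image to form the training input.
        x_t = q_sample(x0_hat, t, alphas_cumprod, noise)
    return F.mse_loss(model(x_t, t), noise)
```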
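
The FID numbers quoted above can be reproduced in spirit with standard tooling. Below is a minimal sketch using torchmetrics' FrechetInceptionDistance (requires `torchmetrics[image]`) with CIFAR-10 real images loaded via torchvision; the paper's exact evaluation settings are in its Appendix, so the sample counts and preprocessing here are placeholders, and `generated` stands in for samples drawn from the trained diffusion model.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchmetrics.image.fid import FrechetInceptionDistance

# Real images: CIFAR-10 as uint8 (N, 3, 32, 32) tensors, which is what
# torchmetrics' FID expects with its default settings (normalize=False).
real = datasets.CIFAR10(root="data", train=True, download=True,
                        transform=transforms.PILToTensor())
loader = DataLoader(real, batch_size=256)

fid = FrechetInceptionDistance(feature=2048)
for imgs, _ in loader:
    fid.update(imgs, real=True)

# Placeholder for model samples: must likewise be uint8 of shape (N, 3, 32, 32).
# Random noise is used here purely so the sketch runs end to end.
generated = torch.randint(0, 256, (10_000, 3, 32, 32), dtype=torch.uint8)
for batch in generated.split(256):
    fid.update(batch, real=False)

print(f"FID: {fid.compute().item():.2f}")
```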