On Error Propagation of Diffusion Models

Authors: Yangming Li, Mihaela van der Schaar

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have conducted extensive experiments on multiple image datasets, showing that our proposed regularization reduces error propagation, significantly improves vanilla DMs, and outperforms previous baselines.
Researcher Affiliation | Academia | Yangming Li, Mihaela van der Schaar, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, yl874@cam.ac.uk
Pseudocode | Yes | Algorithm 1: Optimization with Our Proposed Regularization (a hedged training-step sketch appears after this table)
Open Source Code | Yes | The source code of this work is publicly available at a personal repository: https://github.com/louisli321/epdm, and our lab repository: https://github.com/vanderschaarlab/epdm.
Open Datasets | Yes | We train standard diffusion models (Ho et al., 2020) on two datasets: CIFAR-10 (32×32) (Krizhevsky et al., 2009) and ImageNet (32×32) (Deng et al., 2009). We conduct experiments on three image datasets: CIFAR-10 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and CelebA (Liu et al., 2015), with image shapes respectively as 32×32, 32×32, and 64×64. (A data-loading sketch appears after this table.)
Dataset Splits | No | The paper does not explicitly state training, validation, and test dataset splits (e.g., percentages or counts) for reproducibility, nor does it explicitly mention a "validation set" in the context of data partitioning.
Hardware Specification | Yes | All our models run on 2~4 Tesla V100 GPUs and are trained within two weeks.
Software Dependencies | No | The paper mentions using U-Net as the backbone and other common practices for diffusion models, but it does not specify software dependencies with version numbers (e.g., PyTorch version, CUDA version).
Experiment Setup | Yes | The configuration of our model follows common practices: we adopt U-Net (Ronneberger et al., 2015) as the backbone and respectively set the hyper-parameters T, σ_t, L, λ_reg, λ_nll, ρ to 1000, β_t, 5, 0.2, 0.8, 0.003.
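
For readers who want a concrete starting point, the following is a minimal, hypothetical PyTorch sketch of how the reported hyper-parameters could enter a training step that combines the standard DDPM denoising loss (weight λ_nll) with an error-propagation regularizer (weight λ_reg), in the spirit of the paper's Algorithm 1. It is not the authors' implementation: the regularizer is left as a stub, the β schedule is the usual linear DDPM assumption, and every function name is illustrative; the exact objective should be taken from the paper and the released repositories.

```python
import torch
import torch.nn.functional as F

# Hyper-parameters as reported in the experiment-setup row; how rho enters the
# objective is not spelled out in the excerpt, so it is only recorded here.
T, L, LAMBDA_REG, LAMBDA_NLL, RHO = 1000, 5, 0.2, 0.8, 0.003

# Assumed linear DDPM beta schedule; the paper sets sigma_t = beta_t.
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def denoising_loss(model, x0):
    """Standard DDPM epsilon-prediction loss (Ho et al., 2020)."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    ab = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

def error_propagation_reg(model, x0, num_terms=L):
    """Stub for the paper's regularizer (Algorithm 1), which penalizes the
    estimated cumulative (propagated) error; replace with the actual term
    from the paper or the released code."""
    return torch.zeros((), device=x0.device)

def training_step(model, x0, optimizer):
    """One optimization step combining the two weighted loss terms."""
    loss = (LAMBDA_NLL * denoising_loss(model, x0)
            + LAMBDA_REG * error_propagation_reg(model, x0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The weighting of the two terms (0.8 and 0.2) mirrors the λ_nll and λ_reg values quoted above; any model passed to training_step is assumed to take a noisy image batch and a timestep tensor, as a standard diffusion U-Net does.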
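Similarly, the datasets row can be made concrete with a short torchvision loading sketch. CIFAR-10 and CelebA are available directly through torchvision (the CelebA crop size below is an assumption, not taken from the paper, and the CelebA download can require manual steps); the 32×32 ImageNet variant (downsampled ImageNet) is typically obtained separately and is therefore omitted here.

```python
import torchvision
import torchvision.transforms as tfm
from torch.utils.data import DataLoader

# CIFAR-10 at its native 32x32 resolution.
cifar_tf = tfm.Compose([tfm.RandomHorizontalFlip(), tfm.ToTensor()])
cifar = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=cifar_tf)

# CelebA center-cropped and resized to 64x64 (crop size 140 is an assumed,
# commonly used choice).
celeba_tf = tfm.Compose([
    tfm.CenterCrop(140), tfm.Resize(64),
    tfm.RandomHorizontalFlip(), tfm.ToTensor()])
celeba = torchvision.datasets.CelebA(
    root="./data", split="train", download=True, transform=celeba_tf)

cifar_loader = DataLoader(cifar, batch_size=128, shuffle=True, num_workers=4)
celeba_loader = DataLoader(celeba, batch_size=128, shuffle=True, num_workers=4)
```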