Reflected Diffusion Models

Authors: Aaron Lou, Stefano Ermon

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
LLM Response: "On standard image benchmarks, our method is competitive with or surpasses the state of the art without architectural modifications and, for classifier-free guidance, our approach enables fast exact sampling with ODEs and produces more faithful samples under high guidance weight." "On common image generation benchmarks, our results are competitive with or surpass the current state of the art. In particular, on unconditional CIFAR-10 generation (Krizhevsky, 2009), we achieve a state of the art Inception Score of 10.42 and a comparable FID score of 2.72."
Researcher Affiliation: Academia
LLM Response: "Department of Computer Science, Stanford University. Correspondence to: Aaron Lou <aaronlou@stanford.edu>."
Pseudocode: No
LLM Response: The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code: Yes
LLM Response: Code link: https://github.com/louaaron/Reflected-Diffusion/
Open Datasets: Yes
LLM Response: "On common image generation benchmarks, our results are competitive with or surpass the current state of the art. In particular, on unconditional CIFAR-10 generation (Krizhevsky, 2009)..." "We test Reflected Diffusion Models on CIFAR-10 (Krizhevsky, 2009) and ImageNet32 (van den Oord et al., 2016) for likelihoods, both without data augmentation."
Dataset Splits: Yes
LLM Response: "On common image generation benchmarks, our results are competitive with or surpass the current state of the art. In particular, on unconditional CIFAR-10 generation (Krizhevsky, 2009)..." "We test Reflected Diffusion Models on CIFAR-10 (Krizhevsky, 2009) and ImageNet32 (van den Oord et al., 2016) for likelihoods, both without data augmentation."
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running experiments. It generally mentions 'neural network computation' and refers to models without hardware specifications.
Software Dependencies: No
LLM Response: The paper mentions software components such as 'Adam', 'Swish activation', 'Layer Norm', and 'RK45 solver' but does not provide specific version numbers for these or any other key software dependencies.
Experiment Setup: Yes
LLM Response: "We exactly follow Song et al. (2021b) for both models and training hyperparameters. The only differences are that we set σ1 = 5 instead of 50 (for the VE SDE)... We sample with 1000 predictor (Reflected Euler-Maruyama) steps with 1000 corrector (Reflected Langevin) steps (Song et al., 2021b). We use a signal-to-noise ratio of 0.03. We train with Adam (Kingma & Ba, 2014) at a 2 × 10⁻⁴ learning rate."
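The quoted setup mentions a "Reflected Euler-Maruyama" predictor, i.e. an Euler-Maruyama update followed by reflecting the iterate back into the data domain. The sketch below is not the authors' implementation; it is a minimal illustration of one such predictor step, assuming a unit-cube domain [0, 1]^d and a VE-style diffusion coefficient. The function names (`reflect_unit_cube`, `reflected_em_step`), the score callable, and the coefficient schedule are all assumptions for illustration.

```python
import numpy as np

def reflect_unit_cube(x):
    """Fold points back into [0, 1]^d by repeated boundary reflection.

    The period-2 triangle wave |x| mod 2, mirrored above 1, is equivalent
    to reflecting at the 0 and 1 boundaries any number of times.
    """
    x = np.abs(x) % 2.0
    return np.where(x > 1.0, 2.0 - x, x)

def reflected_em_step(x, score, t, dt, sigma, rng=np.random.default_rng()):
    """One reverse-time Euler-Maruyama predictor step, then reflection.

    x      : current sample, array of shape (d,)
    score  : callable (x, t) -> estimated score (assumed, stands in for
             the trained network)
    sigma  : callable t -> diffusion coefficient at time t (VE-style)
    """
    g = sigma(t)
    drift = (g ** 2) * score(x, t) * dt          # reverse-SDE drift term
    noise = g * np.sqrt(dt) * rng.standard_normal(x.shape)
    return reflect_unit_cube(x + drift + noise)  # project back into domain
```

A corrector (the quoted "Reflected Langevin" step) would follow the same pattern: a Langevin update using the score, again followed by `reflect_unit_cube`, so every iterate stays inside the domain.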