Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models
Authors: Seyedmorteza Sadat, Otmar Hilliges, Romann Weber
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate that APG is compatible with various conditional diffusion models and samplers, leading to improved FID, recall, and saturation scores while maintaining precision comparable to CFG, making our method a superior plug-and-play alternative to standard classifier-free guidance. |
| Researcher Affiliation | Collaboration | 1ETH Zürich, 2Disney Research\|Studios EMAIL {romann.weber}@disneyresearch.com |
| Pseudocode | Yes | APG is easy to implement, and we provide the source code in Algorithm 1 (appendix). |
| Open Source Code | Yes | The source code for implementing APG is provided in Algorithm 1, and Appendix D outlines additional implementation details, including the hyperparameters used in the main experiments. |
| Open Datasets | Yes | We mainly experiment with text-to-image generation with Stable Diffusion (Rombach et al., 2022) and class-conditional ImageNet (Russakovsky et al., 2015) generation using EDM2 (Karras et al., 2023) and DiT-XL/2 (Peebles & Xie, 2022). For text-to-image models, the FID is evaluated using the evaluation subset of MS COCO 2017 (Lin et al., 2014). |
| Dataset Splits | Yes | The FID is computed using 10,000 generated images and the whole training set for class-conditional ImageNet models. For text-to-image models, the FID is evaluated using the evaluation subset of MS COCO 2017 (Lin et al., 2014). |
| Hardware Specification | Yes | Specifically, in the case of Stable Diffusion XL, the forward pass through the diffusion network takes approximately 130 milliseconds on an RTX 3090 GPU for a single image, while the guidance step requires only about 0.45 milliseconds. |
| Software Dependencies | No | The paper mentions 'PyTorch implementation' for Algorithm 1 but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For all experiments, we use the default diffusion sampler from each model (e.g., Euler scheduler for Stable Diffusion XL) along with pretrained checkpoints and corresponding codebases to ensure consistency in weights and the sampling process with the original frameworks. The hyperparameters used for the main experiment are given in Table 10. |
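To make the guidance-step cost cited above concrete, here is a minimal NumPy sketch of a projection-based guidance update in the spirit of APG. This is an illustration only, not the authors' Algorithm 1: the function name `projected_guidance` and the parameter `eta` are assumptions, and the paper's full method includes additional components (e.g. rescaling and momentum) omitted here. The idea sketched is that the classifier-free guidance difference is split into components parallel and orthogonal to the conditional prediction, and the parallel component, associated with oversaturation, is downweighted.

```python
import numpy as np

def projected_guidance(cond, uncond, scale=7.5, eta=0.0):
    """Illustrative projected-guidance step (names and defaults assumed).

    Splits the CFG update (cond - uncond) into components parallel and
    orthogonal to the conditional prediction and downweights the parallel
    part by eta. With eta=1.0 this reduces exactly to standard CFG:
    cond + (scale - 1) * (cond - uncond).
    """
    diff = cond - uncond
    # Project the guidance difference onto the conditional prediction.
    parallel = (np.sum(diff * cond) / np.sum(cond * cond)) * cond
    orthogonal = diff - parallel
    # Keep the orthogonal component; suppress the parallel one.
    return cond + (scale - 1.0) * (eta * parallel + orthogonal)
```

The update itself is a handful of elementwise operations and reductions, which is consistent with the paper's observation that the guidance step is negligible next to a forward pass through the diffusion network.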