Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
You Only Sample Once: Taming One-Step Text-to-Image Synthesis by Self-Cooperative Diffusion GANs
Authors: Yihong Luo, Xiaolong Chen, Xinghua Qu, Tianyang Hu, Jing Tang
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that YOSO achieves the state-of-the-art one-step generation performance even with Low-Rank Adaptation (LoRA) fine-tuning. In particular, we show that the YOSO-PixArt-α can generate images in one step trained on 512 resolution, with the capability of adapting to 1024 resolution without extra explicit training, requiring only ~10 A800 days for fine-tuning. Our code is available at: https://github.com/Luo-Yihong/YOSO. |
| Researcher Affiliation | Collaboration | Yihong Luo¹, Xiaolong Chen², Xinghua Qu³, Tianyang Hu⁴, Jing Tang²,¹ — ¹HKUST, ²HKUST(GZ), ³Bytedance Seed, ⁴NUS |
| Pseudocode | Yes | Algorithm 1 YOSO training from scratch. |
| Open Source Code | Yes | Our code is available at: https://github.com/Luo-Yihong/YOSO. |
| Open Datasets | Yes | We evaluate the performance of the proposed YOSO on CIFAR-10 (Yu et al., 2015) to verify its effectiveness under both training from scratch and fine-tuning settings. We switch the pretrained PixArt-α to v-prediction by the proposed technique introduced in Sec. 5.1, followed by training on the JourneyDB dataset (Pan et al., 2023) with resizing to 512 resolution. We employ Aesthetic Score (AeS) (Schuhmann et al., 2022) to evaluate image quality and adopt the Human Preference Score (HPS) v2.1 (Wu et al., 2023) to evaluate the image-text alignment and human preference. For the evaluation, we evaluate the HPS score on its benchmark, and we evaluate other metrics based on the COCO-5k (Lin et al., 2014) dataset. Table 6: Comparison of different diffusion-GAN hybrid methods on FFHQ-1024. Table 5: Training from scratch on ImageNet-64. |
| Dataset Splits | No | The paper mentions using datasets like CIFAR-10, JourneyDB, and COCO-5k but does not specify the train/test/validation splits used for these datasets within the paper's text. It only states how some data was used for training or evaluation without explicit splitting ratios or counts. For example: "For full fine-tuning, we train YOSO on the JourneyDB dataset (Pan et al., 2023), by resizing to 512 resolution. And we only use the square image." and "For the evaluation, we evaluate the HPS score on its benchmark, and we evaluate other metrics based on COCO-5k (Lin et al., 2014) datasets." |
| Hardware Specification | Yes | requiring only ~10 A800 days for fine-tuning. |
| Software Dependencies | No | The paper mentions using optimizers like Adam and AdamW with their beta parameters, but it does not specify any software versions for libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages (e.g., Python). |
| Experiment Setup | Yes | We apply a batch size of 256 and a constant learning rate of 2e-5 during training. Specifically, we let k = 250 and m = 25 in experiments. For the generator, we use the Adam optimizer with β1 = 0.9 and β2 = 0.999; for the discriminator, we use the Adam optimizer with β1 = 0.0 and β2 = 0.999. We apply gradient norm clipping with a value of 1.0 for the generator only. We apply EMA with a coefficient of 0.9999 for the generator. We set λ_t = SNR(t) and λ_t^con = 1 / (1/SNR(t) − 1/SNR(t−1)). |
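The quoted loss weights λ_t = SNR(t) and λ_t^con = 1 / (1/SNR(t) − 1/SNR(t−1)) can be sketched numerically. A minimal Python sketch, assuming a cosine noise schedule for illustration (the excerpt does not state which schedule YOSO uses); `alpha_bar`, `snr`, and the weight helpers are hypothetical names, and `TRAIN_CFG` simply collects the hyperparameters quoted above:

```python
import math

# Hyperparameters quoted in the Experiment Setup row (for reference only).
TRAIN_CFG = {
    "batch_size": 256,
    "lr": 2e-5,
    "generator_betas": (0.9, 0.999),
    "discriminator_betas": (0.0, 0.999),
    "grad_clip_generator": 1.0,
    "ema": 0.9999,
    "k": 250,
    "m": 25,
}

def alpha_bar(t: int, T: int = 1000) -> float:
    """Cumulative signal level at step t; an assumed cosine schedule."""
    return math.cos((t / T) * math.pi / 2) ** 2

def snr(t: int, T: int = 1000) -> float:
    """Signal-to-noise ratio SNR(t) = alpha_bar(t) / (1 - alpha_bar(t))."""
    ab = alpha_bar(t, T)
    return ab / (1.0 - ab)

def lambda_t(t: int, T: int = 1000) -> float:
    """Denoising-loss weight: lambda_t = SNR(t)."""
    return snr(t, T)

def lambda_con(t: int, T: int = 1000) -> float:
    """Consistency-loss weight: 1 / (1/SNR(t) - 1/SNR(t-1)).

    SNR decreases in t, so the denominator is positive for t >= 1.
    """
    return 1.0 / (1.0 / snr(t, T) - 1.0 / snr(t - 1, T))
```

Both weights are positive over the schedule, and λ_t decays with t since later steps carry more noise.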