Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
LLM4GEN: Leveraging Semantic Representation of LLMs for Text-to-Image Generation
Authors: Mushui Liu, Yuhang Ma, Zhen Yang, Jun Dan, Yunlong Yu, Zeng Zhao, Zhipeng Hu, Bai Liu, Changjie Fan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments indicate that LLM4GEN significantly improves the semantic alignment of SD1.5 and SDXL, demonstrating increases of 9.69% and 12.90% in color on T2I-CompBench, respectively. Moreover, it surpasses existing models in terms of sample quality, image-text alignment, and human evaluation. Experimental results on the MSCOCO benchmark are shown in Tab. 2; further sections cover evaluation on T2I-CompBench, evaluation on Dense Prompts, a user study, and ablation studies. |
| Researcher Affiliation | Collaboration | ¹College of Information Science & Electronic Engineering, Zhejiang University; ²Fuxi AI Lab, NetEase Inc.; ³The Hong Kong University of Science and Technology (Guangzhou) |
| Pseudocode | No | The paper describes methods and processes in textual paragraphs and uses diagrams (e.g., Fig. 3) to illustrate architecture, but it does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an unambiguous statement that the authors are releasing their code for LLM4GEN, nor does it provide a direct link to a source-code repository for their methodology. It mentions using 'official weights and code provided' for a third-party model (SUR-Adapter*), but not for their own work. |
| Open Datasets | Yes | We use 10M text-image pairs collected from LAION-2B (Schuhmann et al. 2021) and the Internet. Evaluation Benchmarks: We comprehensively evaluate our method via four primary benchmarks, i.e., MSCOCO (Lin et al. 2014), T2I-CompBench (Huang et al. 2023), our proposed Dense Prompts benchmark, and a user study. |
| Dataset Splits | No | The paper states that 10M text-image pairs were collected for training and 2M high-quality pairs were used for further training, and that evaluation was done on the MSCOCO, T2I-CompBench, and Dense Prompts benchmarks. However, it does not provide specific training/validation/test splits (e.g., percentages or exact counts) for the 10M or 2M training data needed to reproduce the experiments. |
| Hardware Specification | Yes | Training is conducted on 8 NVIDIA A100 GPUs with learning rates of 2e-5 and 1e-5 for LLM4GEN-SD1.5 and LLM4GEN-SDXL, respectively. |
| Software Dependencies | No | The paper mentions models and encoders such as the CLIP text encoder (CLIP ViT-L/14), Llama-2 7B/13B, and T5-XL, but it does not provide specific version numbers for programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA) used in the implementation. |
| Experiment Setup | Yes | Training is conducted on 8 NVIDIA A100 GPUs with learning rates of 2e-5 and 1e-5 for LLM4GEN-SD1.5 and LLM4GEN-SDXL, respectively. The batch sizes are set to 256 and 128, and the training steps to 20k and 40k. Additionally, LLM4GEN-SDXL is further trained on 2M high-quality pairs at 1024 resolution. During inference, the DDIM sampler (Song, Meng, and Ermon 2020) is used with 50 steps, and the classifier-free guidance scale is set to 7.5. |
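The two inference-time mechanics quoted in the setup row (classifier-free guidance at scale 7.5 and deterministic DDIM updates) are standard and can be sketched in a few lines of numpy. This is a minimal illustration of those published equations, not the authors' implementation; function names and the toy noise-schedule values are our own.

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, scale=7.5):
    """Classifier-free guidance: move the noise prediction from the
    unconditional estimate toward the conditional one by `scale`
    (the paper uses 7.5 at inference)."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0), following
    Song, Meng, and Ermon (2020)."""
    # Predict the clean sample x0 from the current noisy sample.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Re-noise the x0 estimate to the previous timestep's noise level.
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps
```

With `eta = 0` the DDIM trajectory is fully deterministic, which is what makes a short 50-step schedule (as quoted above) viable compared to full ancestral DDPM sampling.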