FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation

Authors: Christopher Teo, Milad Abdollahzadeh, Xinda Ma, Ngai-Man (Man) Cheung

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on a wide range of tSAs show that our proposed method outperforms SOTA approach's image generation quality, while achieving competitive fairness. In this section, we evaluate our proposed FairQueue against the existing SOTA ITI-GEN [16] over various tSAs. Then, we conduct an ablation study by first evaluating the contribution brought by each component of FairQueue, i.e., Prompt Queuing and Attention Scaling.
Researcher Affiliation | Academia | Christopher T. H. Teo (christopher_teo@mymail.sutd.edu.sg), Milad Abdollahzadeh (milad_abdollahzadeh@sutd.sg), Xinda Ma (xinda_ma@sutd.edu.sg), and Ngai-Man Cheung (ngaiman_cheung@sutd.edu.sg), Singapore University of Technology and Design (SUTD)
Pseudocode | No | The paper describes its methods through textual explanations and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | In addition, to facilitate reproducibility we have also provided the anonymous link for the code used in this paper.
Open Datasets | Yes | Following [16], we utilize the publicly available reference dataset from CelebA [29], FairFace [45] and FAIR benchmark [44]. In addition, all datasets used in this paper are publicly available.
Dataset Splits | No | The paper describes the datasets used and mentions repeating experiments for statistical significance ('We repeat this process 5 times'), but it does not specify explicit training, validation, or test splits (e.g., percentages or sample counts) for its own experimental setup beyond referencing existing datasets.
Hardware Specification | Yes | T2I Sample Generation: RTX3090, 25.0, 4.87
Software Dependencies | No | The paper mentions using 'Stable Diffusion v1.4 [1]' and an 'Adam [48] optimizer' but does not provide version numbers for other key software components such as Python, PyTorch/TensorFlow, or CUDA, which are necessary for full reproducibility.
Experiment Setup | Yes | For sample generation, we follow the recommended diffusion steps of l = 50 and utilize an Attention scale of c = 10 and an Attention Queuing transitioning step of 10. S_i, with a token length of 3 per tSA, is optimized with a learning rate of lr = 0.01.
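
The two FairQueue components named in the ablation (Prompt Queuing and Attention Scaling) map directly onto the hyperparameters quoted above: the transitioning step controls when the conditioning prompt switches, and c scales cross-attention on the tSA tokens. The sketch below is only a conceptual illustration of that reading; the function names, tensor shapes, and switching rule are assumptions, not the authors' released implementation.

```python
import torch

def queued_prompt(step, transition_step, base_emb, target_emb):
    """Prompt Queuing (sketch): condition early denoising steps on the base
    prompt embedding, then switch to the tSA-specific target prompt embedding
    once the transitioning step (10 in the reported setup) is reached."""
    return base_emb if step < transition_step else target_emb

def scale_tsa_attention(attn_probs, tsa_token_ids, scale=10.0):
    """Attention Scaling (sketch): amplify the cross-attention mass assigned
    to the tSA tokens by a constant factor (c = 10 above) and renormalize so
    each attention row still sums to one."""
    attn_probs = attn_probs.clone()
    attn_probs[..., tsa_token_ids] *= scale
    return attn_probs / attn_probs.sum(dim=-1, keepdim=True)

# Toy check of the scaling helper on a single uniform attention row.
probs = torch.full((1, 77), 1.0 / 77)            # 77 = CLIP text-token length
scaled = scale_tsa_attention(probs, [5, 6, 7])   # hypothetical tSA token slots
assert torch.isclose(scaled.sum(), torch.tensor(1.0))
```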
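
For quick reference, the quoted hyperparameters, together with the base model, optimizer, and repeat count cited in the other rows, can be collected into a single configuration. The key names below are placeholders chosen for readability, not identifiers from the paper's code:

```python
# Values come from the rows above; key names are illustrative placeholders.
fairqueue_setup = {
    "base_model": "Stable Diffusion v1.4",
    "diffusion_steps": 50,           # l = 50 denoising steps
    "attention_scale": 10,           # c = 10
    "queuing_transition_step": 10,   # Attention Queuing transitioning step
    "tokens_per_tsa": 3,             # token length of S_i per tSA
    "optimizer": "Adam",             # optimizer used to learn S_i
    "learning_rate": 0.01,           # lr = 0.01
    "num_repeats": 5,                # experiments repeated 5 times
}
```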