Analyzing and Combating Attribute Bias for Face Restoration
Authors: Zelin Li, Dan Zeng, Xiao Yan, Qiaomu Shen, Bo Tang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To this end, we thoroughly analyze attribute bias with extensive experiments and find that two major causes are the lack of attribute information in LR faces and bias in the training data. Moreover, we propose the DebiasFR framework to produce HR faces with high image quality and accurate facial attributes. Experiment results show that DebiasFR has comparable image quality but significantly smaller attribute bias when compared with state-of-the-art FR methods. |
| Researcher Affiliation | Academia | Zelin Li^{1,2}, Dan Zeng^{1,2}, Xiao Yan^{2}, Qiaomu Shen^{1,2}, Bo Tang^{1,2} — 1: Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology; 2: Department of Computer Science and Engineering, Southern University of Science and Technology. {lizl2017@mail., zengd@, yanx@, shenqm@, tangb3@}sustech.edu.cn |
| Pseudocode | No | The paper describes the model architecture and training strategy but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and supplementary materials are available at the link: https://github.com/Seeyn/DebiasFR. |
| Open Datasets | Yes | Training data. We use the FFHQ-Aging dataset [Or-El et al., 2020] from FFHQ [Karras et al., 2019] as our training data. Compared to FFHQ, FFHQ-Aging removes images with large challenges, such as low-confidence annotation predictions, large pose variations, and severe face occlusion. It consists of 53831 images and is annotated with both gender and age. |
| Dataset Splits | No | The paper mentions training data (FFHQ-Aging) and test data (CelebA-HQ, IMDB-WIKI, COX) but does not specify any training/validation/test dataset splits with percentages or sample counts for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud instance types used for running its experiments. |
| Software Dependencies | No | The paper mentions using pre-trained models like CLIP [Radford et al., 2021] and StyleGAN [Karras et al., 2019] but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Specifically, the factors k, r, σ, and q are randomly sampled from [0, 0.1], [0.8, 20], [0, 20], and [60, 100], respectively. The implementation details, such as parameter settings and weights of the loss terms, are placed in the supplementary materials. |
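The sampling ranges quoted in the Experiment Setup row can be sketched in code. This is a minimal sketch, not the authors' implementation: it assumes k, r, σ, and q parameterize the blur, downsampling, noise, and JPEG-compression stages of a standard face-restoration degradation pipeline, and all names (`RANGES`, `sample_degradation_params`) are hypothetical.

```python
import random

# Assumed roles of the four factors (the paper only gives the ranges):
#   k     -- blur-kernel factor, sampled from [0, 0.1]
#   r     -- downsampling ratio, sampled from [0.8, 20]
#   sigma -- additive-noise level, sampled from [0, 20]
#   q     -- JPEG quality, sampled from [60, 100] (integer)
RANGES = {
    "k": (0.0, 0.1),
    "r": (0.8, 20.0),
    "sigma": (0.0, 20.0),
}
JPEG_QUALITY = (60, 100)

def sample_degradation_params(rng=random):
    """Uniformly sample one set of degradation factors per training image."""
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}
    params["q"] = rng.randint(*JPEG_QUALITY)  # JPEG quality is an integer
    return params
```

Each call yields one parameter set, so every training image is degraded with an independently sampled configuration.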