On Adversarial Robustness of Demographic Fairness in Face Attribute Recognition

Authors: Huimin Zeng, Zhenrui Yue, Lanyu Shang, Yang Zhang, Dong Wang

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experimental results show the effectiveness of both our proposed attack and defense methods across various model architectures and FAR applications.
Researcher Affiliation | Academia | Huimin Zeng, Zhenrui Yue, Lanyu Shang, Yang Zhang, Dong Wang, University of Illinois at Urbana-Champaign {huiminz3, zhenrui3, lshang3, yzhangnd, dwang24}@illinois.edu
Pseudocode | No | The paper includes mathematical formulations and descriptions of processes, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code | Yes | For reproducibility, all details (e.g., experimental setup, attacker specification) and code are uploaded within the supplementary materials of this submission.
Open Datasets | Yes | We use the large scale CelebA [Liu et al., 2015] and FairFace dataset [Karkkainen and Joo, 2021] for our experiments.
Dataset Splits | No | The paper states: 'To simulate the attack on fair classifiers and demonstrate the bias introduced by various fairness attacks, we sample two fair subsets to train and test the victim models.' It mentions train and test sets but does not describe a separate validation split (see the illustrative sketch after this table).
Hardware Specification | No | The paper does not specify the exact hardware components (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies (e.g., specific Python, PyTorch, or library versions).
Experiment Setup | No | The paper defers these details to the supplementary materials: 'For reproducibility, all details (e.g., experimental setup, attacker specification) and code are uploaded within the supplementary materials of this submission.' Specific settings such as hyperparameters and training configurations are therefore not provided in the main text.
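
The Open Datasets and Dataset Splits rows above describe loading public face-attribute data and sampling demographically balanced ("fair") train and test subsets. The authors' own pipeline is distributed in their supplementary materials and is not reproduced here; the following is only a minimal sketch of what such fair subsampling could look like. The `torchvision.datasets.CelebA` call is a standard API, but the choice of "Male" as the protected attribute, the per-group sample counts, and the balancing rule are illustrative assumptions.

```python
# Hypothetical sketch of fair (balanced) subsampling on CelebA; not the authors' released code.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])

# CelebA provides 40 binary attributes per image; download=True may require a
# manual download if the hosting quota is exceeded.
train_full = datasets.CelebA(root="data", split="train", target_type="attr",
                             transform=transform, download=True)
test_full = datasets.CelebA(root="data", split="test", target_type="attr",
                            transform=transform, download=True)

def balanced_indices(dataset, protected="Male", per_group=5000, seed=0):
    """Pick the same number of samples from each protected group (assumed balancing rule)."""
    col = dataset.attr_names.index(protected)
    attrs = dataset.attr  # (N, 40) tensor with 0/1 attribute labels
    gen = torch.Generator().manual_seed(seed)
    picked = []
    for group_value in (0, 1):
        idx = (attrs[:, col] == group_value).nonzero(as_tuple=True)[0]
        perm = torch.randperm(len(idx), generator=gen)[:per_group]
        picked.append(idx[perm])
    return torch.cat(picked).tolist()

# Two balanced subsets: one to train and one to test the victim model.
fair_train = Subset(train_full, balanced_indices(train_full, per_group=5000))
fair_test = Subset(test_full, balanced_indices(test_full, per_group=1000))
```

A validation split, which the paper does not mention, could be drawn the same way from CelebA's "valid" split if one were needed for model selection.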