Fair-CDA: Continuous and Directional Augmentation for Group Fairness

Authors: Rui Sun, Fengwei Zhou, Zhenhua Dong, Chuanlong Xie, Lanqing Hong, Jiawei Li, Rui Zhang, Zhen Li, Zhenguo Li

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that Fair-CDA consistently outperforms state-of-the-art methods on widely-used benchmarks, e.g., Adult, CelebA and MovieLens. Experiments on Public Datasets: We evaluate Fair-CDA on tabular dataset Adult (Dua and Graff 2017), vision dataset CelebA (Liu et al. 2018), and recommender dataset MovieLens (Harper and Konstan 2015). We demonstrate the effectiveness of Fair-CDA across diverse tasks and task models.
Researcher Affiliation | Collaboration | 1) The Future Network of Intelligence Institute, The Chinese University of Hong Kong (Shenzhen); 2) School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen); 3) Huawei Noah's Ark Lab; 4) Beijing Normal University; 5) Tsinghua University
Pseudocode | Yes | Algorithm 1: Fair-CDA: Continuous and Directional Augmentation for Group Fairness (a hedged sketch of such a training step is given after this table).
Open Source Code | No | The paper states 'Our framework is implemented with PyTorch 1.4 (under BSD license), Python 3.7, and CUDA v9.0.' but does not provide a link to, or an explicit statement about the availability of, its own source code.
Open Datasets | Yes | We evaluate Fair-CDA on tabular dataset Adult (Dua and Graff 2017), vision dataset CelebA (Liu et al. 2018), and recommender dataset MovieLens (Harper and Konstan 2015).
Dataset Splits | Yes | We adjust γ on the Adult dataset (Dua and Graff 2017) to get the best accuracy and fairness trade-off on the validation set and then adopt the same value, 0.9, for all the datasets (a hedged loading-and-split sketch is given after this table).
Hardware Specification | Yes | We conducted experiments on NVIDIA Tesla V100.
Software Dependencies | Yes | Our framework is implemented with PyTorch 1.4 (under BSD license), Python 3.7, and CUDA v9.0.
Experiment Setup | Yes | Our method introduces three additional hyper-parameters: two loss weights, β and γ, and the perturbation budget λ. In the experiments, we set β according to the initial loss values so that the different loss terms fall in the same magnitude range. We adjust γ on the Adult dataset (Dua and Graff 2017) to get the best accuracy and fairness trade-off on the validation set and then adopt the same value, 0.9, for all the datasets. Our method balances prediction accuracy and fairness by adjusting the perturbation strength λ. Input: Training data {(x_i, y_i, a_i)}_{i=1}^{n}, batch size b, learning rates η_1, η_2, perturbation strength λ, weights γ, β, iteration numbers T, S (see the training-step sketch after this table).
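
The Open Datasets and Dataset Splits rows above confirm that Adult is taken from UCI and that a validation set is used to tune γ, but the quoted text does not state split ratios or which sensitive attribute is used. The sketch below is a minimal, hypothetical loading-and-splitting routine: the 70/10/20 split, the choice of 'sex' as the group attribute, and the stratification scheme are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical loading/split sketch for the UCI Adult dataset (Dua and Graff 2017).
# Assumed (not from the paper): 70/10/20 split, 'sex' as the sensitive attribute,
# stratification on (label, group) so every split contains all demographic groups.
import pandas as pd
from sklearn.model_selection import train_test_split

ADULT_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education-num", "marital-status",
    "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss",
    "hours-per-week", "native-country", "income",
]

df = pd.read_csv(ADULT_URL, names=COLUMNS, skipinitialspace=True)
df["label"] = (df["income"] == ">50K").astype(int)   # binary task label y
df["group"] = (df["sex"] == "Male").astype(int)      # sensitive attribute a (assumed)

# A single string combining label and group keeps the strata balanced across splits.
strata = df["label"].astype(str) + "_" + df["group"].astype(str)
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0, stratify=strata)

strata_train = train_df["label"].astype(str) + "_" + train_df["group"].astype(str)
train_df, val_df = train_test_split(
    train_df, test_size=0.125, random_state=0, stratify=strata_train
)  # 0.125 of the remaining 80% = 10% validation, used for the accuracy/fairness trade-off

print(len(train_df), len(val_df), len(test_df))
```

The validation split is where γ would be tuned before fixing it at 0.9 for the other datasets, as described in the Experiment Setup row.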
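
The Pseudocode and Experiment Setup rows quote Algorithm 1 and its hyper-parameters (loss weights β and γ, perturbation strength λ) but not the full update rule. The following is a minimal, hypothetical PyTorch rendering of one training step, written under the assumption that the directional augmentation shifts extracted features along the gradient of a sensitive-attribute classifier scaled by λ, and that the task, augmentation, and group losses are mixed with γ and β. Model sizes, the optimizer, and the exact loss composition are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical Fair-CDA-style training step. Assumptions not taken from the quoted
# text: the augmentation direction is the gradient of a group (sensitive-attribute)
# classifier w.r.t. the extracted features, and the objective mixes a clean task
# loss, an augmented task loss (weight gamma), and the group loss (weight beta).
import torch
import torch.nn as nn
import torch.nn.functional as F

input_dim, feature_dim, num_classes, num_groups = 108, 64, 2, 2  # 108: one-hot Adult width (assumed)
encoder = nn.Sequential(nn.Linear(input_dim, feature_dim), nn.ReLU())
task_head = nn.Linear(feature_dim, num_classes)
group_head = nn.Linear(feature_dim, num_groups)  # predicts the sensitive attribute a

params = list(encoder.parameters()) + list(task_head.parameters()) + list(group_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)  # the paper's eta_1, eta_2 are collapsed into one optimizer here

beta, gamma, lam = 1.0, 0.9, 0.5  # gamma = 0.9 as reported; beta and lam are placeholders

def train_step(x, y, a):
    """x: input features, y: task labels, a: sensitive-attribute labels."""
    z = encoder(x)

    # The group classifier defines the direction along which group information changes.
    group_loss = F.cross_entropy(group_head(z), a)
    direction, = torch.autograd.grad(group_loss, z, create_graph=True)
    direction = F.normalize(direction, dim=1)

    # Continuous, directional augmentation: move features by lam along that direction.
    z_aug = z + lam * direction

    task_loss = F.cross_entropy(task_head(z), y)     # accuracy on clean features
    aug_loss = F.cross_entropy(task_head(z_aug), y)  # fairness-oriented term on augmented features
    loss = task_loss + gamma * aug_loss + beta * group_loss

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with a random mini-batch shaped like encoded Adult rows:
x = torch.randn(32, input_dim)
y = torch.randint(0, num_classes, (32,))
a = torch.randint(0, num_groups, (32,))
print(train_step(x, y, a))
```

In this sketch, setting λ = 0 reduces to ordinary training, which mirrors the quoted description of λ as the knob that trades prediction accuracy against fairness.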