Towards Harmless Rawlsian Fairness Regardless of Demographic Prior

Authors: Xuanqian Wang, Jing Li, Ivor Tsang, Yew Soon Ong

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental findings indicate that regression tasks, which are relatively unexplored in the literature, can achieve significant fairness improvement through VFair regardless of any prior, whereas classification tasks usually do not because of their quantized utility measurements."
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Beihang University, China; (2) Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore; (3) Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore; (4) College of Computing and Data Science, Nanyang Technological University, Singapore
Pseudocode | Yes | "Algorithm 1: Harmless Rawlsian Fairness without Demographics via VFair. Input: Training set D = {z_i}_{i=1}^N, where z_i = (x_i, y_i) ∈ X × Y. Output: Learned model parameterized by θ ∈ Θ."
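The excerpt quotes only Algorithm 1's input and output. As a rough illustration of the kind of update such an algorithm could perform, the sketch below penalizes the variance of per-example losses alongside their mean; the function name vfair_step, the penalty form, and the trade-off weight lam are assumptions of this sketch, not details confirmed by the quoted text.

```python
import torch

def vfair_step(model, optimizer, loss_fn, x, y, lam=1.0):
    """One hypothetical VFair-style update: penalize the variance of
    per-example losses so the worst-off examples are not sacrificed,
    while the mean-loss term preserves overall utility (harmlessness).
    `loss_fn` is assumed to use reduction='none' so it returns one loss
    per example; `lam` is an assumed trade-off weight, not a quantity
    taken from the paper."""
    optimizer.zero_grad()
    per_example = loss_fn(model(x), y)  # shape: (batch,)
    objective = per_example.mean() + lam * per_example.var()
    objective.backward()
    optimizer.step()
    return objective.item()
```

Note that an objective of this shape needs no group labels at all: it operates only on per-example losses, which is consistent with the "regardless of demographic prior" framing, and reducing loss variance raises the utility of the worst-off examples in the Rawlsian sense.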
Open Source Code | Yes | "The implementation of our method is publicly available at https://github.com/wxqpxw/VFair."
Open Datasets | Yes | "Datasets. Six datasets encompassing binary classification, multi-class classification, and regression are employed: (i) UCI Adult [33], (ii) Law School [34], (iii) COMPAS [35], (iv) CelebA [36], (v) Communities & Crime (C&C) [37], (vi) AgeDB [38]."
Dataset Splits | Yes | "A pre-processing method [20] accomplished cost-free fairness through re-weighting training examples based on both fairness-related measures and predictive utility on a validation set."
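The quoted description of the baseline [20] gives only the high-level idea. A purely hypothetical sketch of validation-based re-weighting is shown below; the specific rule (inverse of per-group validation utility) and the helper name reweight_examples are illustrative only, not the actual procedure of [20].

```python
import numpy as np

def reweight_examples(group_ids, val_utility_per_group):
    """Hypothetical illustration: assign larger training weights to
    examples whose group performs worse on the validation set, so that
    fairness improves without (ideally) hurting overall utility.
    `val_utility_per_group` maps group id -> validation accuracy/utility."""
    weights = np.array([1.0 / max(val_utility_per_group[g], 1e-8)
                        for g in group_ids])
    return weights / weights.mean()  # normalize so the average weight is 1
```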
Hardware Specification | Yes | "All experiments were conducted on Ubuntu 20.04 with one NVIDIA GeForce RTX 3090 graphics processing unit (GPU), which has a memory capacity of 24 GB."
Software Dependencies | No | "All the deep-learning-based models... conform to a shared neural network framework. ... Throughout these experiments, the Adagrad optimizer was employed. ... we implemented Binary Cross-Entropy, Cross-Entropy, and Mean Square Error for binary classification, multi-class classification, and regression tasks, respectively."
Experiment Setup | Yes | "Specifically, for binary classification tasks, the core neural network architecture consists of an embedding layer followed by two hidden layers, with 64 and 32 neurons, respectively. ... Throughout these experiments, the Adagrad optimizer was employed. ... For the loss function, we implemented Binary Cross-Entropy, Cross-Entropy, and Mean Square Error for binary classification, multi-class classification, and regression tasks, respectively. ... To compare all baselines under the harmless fairness setting, we implement them into the same scheme and select the epoch with the nearest loss compared to a converged ERM."
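Putting the quoted details together, a minimal PyTorch sketch of the shared backbone for the binary-classification tasks might look as follows. The 64/32 hidden widths, the Adagrad optimizer, and the per-task losses come from the quoted setup; the embedding dimension, activations, learning rate, and input sizes are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class TabularNet(nn.Module):
    """Sketch of the described backbone: an embedding layer followed by
    two hidden layers of 64 and 32 units. A single shared embedding over
    a flattened categorical index is an assumed simplification."""
    def __init__(self, num_categories, embed_dim=8, num_numeric=0):
        super().__init__()
        self.embed = nn.Embedding(num_categories, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + num_numeric, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # single logit for binary classification
        )

    def forward(self, cat_idx, numeric):
        h = torch.cat([self.embed(cat_idx), numeric], dim=-1)
        return self.mlp(h).squeeze(-1)

# Hypothetical input sizes; only the optimizer choice is from the paper.
model = TabularNet(num_categories=100, num_numeric=6)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()  # Cross-Entropy / MSE for the other tasks
```

The quoted epoch-selection rule would then amount to training an ERM model to convergence, recording its final loss, and for each fairness baseline reporting the epoch whose training loss is closest to that ERM reference, which is what makes the comparison "harmless".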