FairFed: Enabling Group Fairness in Federated Learning

Authors: Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, A. Salman Avestimehr

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate FairFed empirically versus common baselines for fair ML and federated learning and demonstrate that it provides fairer models, particularly under highly heterogeneous data distributions across clients.
Researcher Affiliation | Academia | University of Southern California (USC): yessa@usc.edu, shenyan@usc.edu, chaoyang.he@usc.edu, emiliofe@usc.edu, avestime@usc.edu
Pseudocode | Yes | Algorithm 1: FairFed Algorithm (tracking EOD). (A hedged sketch of the weight update appears after this table.)
Open Source Code | No | The paper states 'We developed FairFed using FedML (He et al. 2020), which is a research-friendly FL library for exploring new algorithms,' indicating that the authors built on an existing library, but there is no explicit statement about releasing their own FairFed implementation, and no link to one.
Open Datasets | Yes | We use two binary decision datasets that are widely investigated in the fairness literature: the Adult dataset (Dua and Graff 2017) and the ProPublica COMPAS dataset (Larson et al. 2016). In our experiments, we use US Census data to present the performance of our FairFed approach in a distributed learning application with a natural data partitioning. Our experiments are performed on the ACSIncome dataset (Ding et al. 2021). (A data-loading sketch follows the table.)
Dataset Splits | No | The paper describes the datasets and heterogeneity levels used, but does not explicitly provide the training, validation, or test splits (e.g., percentages or sample counts) needed for reproducibility.
Hardware Specification | Yes | We use a server with an AMD EPYC 7502 32-core processor, and use a parallel training paradigm where each client is handled by an independent process using MPI (message passing interface). (A minimal MPI sketch follows the table.)
Software Dependencies | No | The paper mentions FedML (He et al. 2020) and MPI (message passing interface) but does not provide version numbers for these or for other dependencies such as Python or PyTorch.
Experiment Setup | Yes | We also evaluate how the trade-off between fairness and accuracy changes with the fairness budget β in FairFed (see equation (6)). Higher values of β result in the fairness metric having a higher impact on the model optimization, while a lower β results in a reduced perturbation of the default FedAvg weights due to fair training; note that at β = 0, FairFed is equivalent to FedAvg, since the initial weights ω_k^0 are unchanged. (A small numeric illustration follows the table.)
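
To make the Pseudocode row concrete: below is a minimal sketch of the FairFed weight update as Algorithm 1 describes it, using the notation quoted above (fairness budget β, per-client metric gaps Δ_k, aggregation weights initialized to the FedAvg values n_k/n). The function names, the non-negativity clipping, and the use of NumPy are our own choices, not the authors' reference implementation.

```python
import numpy as np

def fairfed_reweight(weights, gaps, beta):
    """One FairFed-style aggregation-weight update (sketch, not the authors' code).

    weights: current weights w_k, initialized to the FedAvg values n_k / n
    gaps:    Delta_k = |global fairness metric - client k's local metric|, e.g. EOD
    beta:    fairness budget; beta = 0 leaves the FedAvg weights unchanged
    """
    weights = np.asarray(weights, dtype=float)
    gaps = np.asarray(gaps, dtype=float)
    # Shift weight away from clients whose local fairness deviates most from
    # the global metric; subtracting the mean keeps the total shift zero-sum.
    new_w = weights - beta * (gaps - gaps.mean())
    new_w = np.clip(new_w, 0.0, None)  # non-negativity guard (our addition)
    return new_w / new_w.sum()         # renormalize to sum to 1

def aggregate(client_models, weights):
    """FedAvg-style weighted average of client parameter vectors."""
    return sum(w * m for w, m in zip(weights, client_models))
```

Each round, the server would call fairfed_reweight with the freshly measured metric gaps before aggregating, so the fairness signal steers the averaging without touching the clients' local optimizers.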
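
For the Open Datasets row: ACSIncome (Ding et al. 2021) is distributed through that paper's folktables package, whose state-level filtering gives exactly the kind of natural, non-IID client partitioning the experiments rely on. The paper does not show its data-loading code; the snippet below is one standard way to pull per-state ACSIncome data, with the state list and survey year chosen purely for illustration.

```python
# pip install folktables
from folktables import ACSDataSource, ACSIncome

# One US state per federated client: a natural, non-IID partitioning.
states = ["CA", "TX", "NY", "FL"]  # illustrative subset, not the paper's list
source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")

client_data = {}
for state in states:
    acs = source.get_data(states=[state], download=True)
    # Features, binary label (income > $50k), and the group attribute.
    X, y, group = ACSIncome.df_to_numpy(acs)
    client_data[state] = (X, y, group)
```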
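
For the Hardware Specification row: "each client is handled by an independent process using MPI" describes a standard FedML execution mode. The sketch below illustrates that paradigm with mpi4py, with rank 0 acting as the server; it is a generic illustration of the message pattern, not FedML's actual internals, and the round count and model size are placeholders.

```python
# Launch with, e.g.: mpiexec -n 5 python fairfed_mpi_sketch.py  (1 server + 4 clients)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
SERVER = 0

model = np.zeros(10) if rank == SERVER else None

for fl_round in range(3):
    # Server broadcasts the current global model to every client process.
    model = comm.bcast(model, root=SERVER)

    if rank != SERVER:
        # Stand-in for local training plus a local fairness measurement (e.g. EOD).
        payload = (model + 0.01 * np.random.randn(10), float(np.random.rand()))
    else:
        payload = None

    # Server gathers (updated model, local metric) pairs from all ranks.
    gathered = comm.gather(payload, root=SERVER)
    if rank == SERVER:
        updates = [p for p in gathered if p is not None]
        # Plain averaging here; FairFed would reweight using the metric gaps.
        model = np.mean([m for m, _ in updates], axis=0)
```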
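
Finally, for the Experiment Setup row, a small numeric check of the β = 0 claim, mirroring the reweighting sketch above (the sample counts and metric gaps are made up for illustration):

```python
import numpy as np

n_k = np.array([1000, 3000, 6000])     # hypothetical client sample counts
fedavg_w = n_k / n_k.sum()             # FedAvg weights n_k / n = [0.1, 0.3, 0.6]
gaps = np.array([0.20, 0.05, 0.02])    # hypothetical |global - local| EOD gaps

for beta in (0.0, 1.0):
    w = np.clip(fedavg_w - beta * (gaps - gaps.mean()), 0.0, None)
    print(beta, w / w.sum())
# beta = 0.0 -> [0.1, 0.3, 0.6]: the FedAvg weights, unchanged
# beta = 1.0 -> weight moves away from the client with the largest gap
```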