Multi-Dimensional Fair Federated Learning

Authors: Cong Su, Guoxian Yu, Jun Wang, Hui Li, Qingzhong Li, Han Yu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental evaluations based on three benchmark datasets show significant advantages of mFairFL compared to seven state-of-the-art baselines. In this section, we conduct experiments to evaluate the effectiveness of mFairFL using three real-world datasets: Adult (Dua and Graff 2017), COMPAS (ProPublica 2016), and Bank (Moro, Cortez, and Rita 2014).
Researcher Affiliation | Academia | School of Software, Shandong University, Jinan, China; SDU-NTU Joint Centre for AI Research, Shandong University, Jinan, China; School of Computer Science and Engineering, Nanyang Technological University, Singapore
Pseudocode | Yes | Algorithm 1 in the Supplementary file outlines the main procedures of mFairFL.
Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | In this section, we conduct experiments to evaluate the effectiveness of mFairFL using three real-world datasets: Adult (Dua and Graff 2017), COMPAS (ProPublica 2016), and Bank (Moro, Cortez, and Rita 2014).
Dataset Splits | Yes | We split the data among five FL clients in a non-IID manner. For comparative analysis, we consider several baseline methods, categorized into three groups: (i) independent training of the fair model in a decentralized context (IndFair); (ii) fair model training via FedAvg (FedAvg-f); (iii) fair model training in a centralized setting (CenFair). Three SOTA FL methods with group fairness: (i) FedFB (Zeng, Chen, and Lee 2021), which adjusts each sensitive group's weight for aggregation; (ii) FPFL (Gálvez et al. 2021), which enforces fairness by solving a constrained optimization; (iii) FairFed (Ezzeldin et al. 2023), which adjusts clients' weights based on local and global trends of fairness metrics. In addition, we evaluate the proposed mFairFL against cutting-edge FL methods that emphasize client fairness: (i) q-FFL (Li et al. 2019), which adjusts client aggregation weights using a hyperparameter q; (ii) DRFL (Zhao and Joshi 2022), which automatically adapts client weights during model aggregation; (iii) Ditto (Li et al. 2021), a hybrid approach that merges multi-task learning with FL to develop personalized models for each client; and (iv) FedMGDA+ (Hu et al. 2022), which frames FL as a multi-objective optimization problem. Throughout our experiments, we adhere to a uniform protocol of 10 communication rounds and 20 local epochs for all FL algorithms. For other methods, we execute 200 epochs, leveraging cross-validation on the training sets to determine optimal hyperparameters for the comparative methods. Specifically, we group the datasets by sensitive attributes, and randomly assign 30%, 30%, 20%, 10%, 10% of the samples from group 0 and 10%, 20%, 20%, 20%, 30% of the samples from group 1 to five clients, respectively.
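The grouped non-IID split quoted in this row (30/30/20/10/10% of group-0 samples and 10/20/20/20/30% of group-1 samples across five clients) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function name and the synthetic arrays are assumptions.

```python
import numpy as np

def split_noniid_by_group(X, y, sensitive, fractions, seed=0):
    """Assign each sensitive group's samples to clients by fixed fractions.

    fractions: dict mapping sensitive-group value -> per-client fraction list
               (each list must sum to 1). Returns a list of (X_c, y_c) per client.
    """
    rng = np.random.default_rng(seed)
    n_clients = len(next(iter(fractions.values())))
    client_idx = [[] for _ in range(n_clients)]
    for g, fracs in fractions.items():
        idx = np.where(sensitive == g)[0]
        rng.shuffle(idx)
        # cut points that carve this group's samples into per-client chunks
        cuts = np.cumsum([int(f * len(idx)) for f in fracs])[:-1]
        for c, part in enumerate(np.split(idx, cuts)):
            client_idx[c].append(part)
    return [(X[np.concatenate(p)], y[np.concatenate(p)]) for p in client_idx]

# Fractions stated in the paper: group 0 -> 30/30/20/10/10%, group 1 -> 10/20/20/20/30%
fractions = {0: [0.3, 0.3, 0.2, 0.1, 0.1],
             1: [0.1, 0.2, 0.2, 0.2, 0.3]}
```

Because the fractions differ between the two sensitive groups, each client ends up with a different group-0/group-1 mix, which is what makes the split non-IID with respect to the sensitive attribute.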
Hardware Specification | Yes | We use the same server (Ubuntu 18.04.5, Intel Xeon Gold 6248R, and Nvidia RTX 3090) to perform experiments.
Software Dependencies | No | The paper mentions 'Ubuntu 18.04.5' as the operating system but does not provide version numbers for other software dependencies such as libraries, frameworks, or solvers used in the experiments.
Experiment Setup | Yes | Throughout our experiments, we adhere to a uniform protocol of 10 communication rounds and 20 local epochs for all FL algorithms. For other methods, we execute 200 epochs, leveraging cross-validation on the training sets to determine optimal hyperparameters for the comparative methods. All algorithms are grounded in ReLU neural networks with four hidden layers, thereby ensuring an equal count of model parameters.
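The protocol quoted above (10 communication rounds, 20 local epochs, a four-hidden-layer ReLU network, FedAvg-style aggregation) can be sketched as below. The layer widths, uniform client weights, and the placeholder local update are assumptions; the paper's fairness-aware local objective is not reproduced here.

```python
import numpy as np

ROUNDS, LOCAL_EPOCHS, N_CLIENTS = 10, 20, 5
LAYERS = [32, 64, 64, 64, 64, 1]  # input, four ReLU hidden layers, output (widths assumed)

def init_params(layers, seed=0):
    """Small random weights and zero biases for each layer."""
    rng = np.random.default_rng(seed)
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(layers[:-1], layers[1:])]

def forward(params, x):
    """Four-hidden-layer ReLU MLP; final layer is a linear logit."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)  # ReLU
    W, b = params[-1]
    return h @ W + b

def fedavg(client_params, weights):
    """Weighted layer-wise averaging of client parameters (FedAvg aggregation)."""
    avg = []
    for layer in zip(*client_params):  # same layer gathered from every client
        W = sum(w * Wc for w, (Wc, _) in zip(weights, layer))
        b = sum(w * bc for w, (_, bc) in zip(weights, layer))
        avg.append((W, b))
    return avg

def local_train(params, epochs):
    """Placeholder for the local update (epochs of SGD on the client's objective)."""
    return [(W.copy(), b.copy()) for W, b in params]

global_params = init_params(LAYERS)
for _ in range(ROUNDS):
    updates = [local_train(global_params, LOCAL_EPOCHS) for _ in range(N_CLIENTS)]
    global_params = fedavg(updates, [1 / N_CLIENTS] * N_CLIENTS)
```

Fixing the layer sizes across all compared methods, as the quoted setup does, keeps the parameter count identical so that accuracy and fairness differences come from the training procedure rather than model capacity.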