Fairness and Explainability: Bridging the Gap towards Fair Model Explanations

Authors: Yuying Zhao, Yu Wang, Tyler Derr

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on real-world datasets demonstrate the effectiveness of our proposed CFA and highlight the importance of considering fairness from the explainability perspective."
Researcher Affiliation | Academia | "Vanderbilt University, {yuying.zhao, yu.wang.1, tyler.derr}@vanderbilt.edu"
Pseudocode | Yes | "Algorithm 1: Comprehensive Fairness Algorithm (CFA)"
Open Source Code | Yes | "Our code: https://github.com/YuyingZhao/FairExplanations-CFA"
Open Datasets | Yes | "We validate the proposed approach on four real-world benchmark datasets: German, Recidivism (Jordan and Freiburger 2015), Math and Por (Cortez and Silva 2008), which are commonly adopted for fair ML (Le Quy et al. 2022)."
Dataset Splits | Yes | "For a fair comparison, we record the best model hyperparameters based on the overall score in the validation set." (A generic sketch of this selection procedure follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions various models and explainers (e.g., MLP, Reduction, Reweight, NIFTY, FairVGNN, GraphLIME) but does not provide version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used in the experiments.
Experiment Setup | No | The paper refers to "optimal hyperparameters" but does not explicitly list their values (e.g., learning rate, batch size, number of epochs) or other concrete training configurations in the main text.
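
The hyperparameter selection quoted in the Dataset Splits row follows a standard pattern: train one model per candidate configuration and keep the configuration with the best overall validation score. Below is a minimal, hypothetical sketch of that pattern. It is not the authors' Algorithm 1 or released code; the grid values, the `train_and_validate` stand-in, and its dummy scoring are assumptions for illustration only.

```python
from itertools import product

# Hypothetical search space; the paper does not list its actual grid.
grid = {"lr": [1e-3, 1e-2], "alpha": [0.1, 0.5, 1.0]}

def train_and_validate(lr, alpha):
    """Stand-in for training a model with the given hyperparameters and
    returning its overall validation score (e.g., a combined
    utility/fairness score). Replaced here by a dummy quadratic."""
    return -((lr - 1e-2) ** 2 + (alpha - 0.5) ** 2)

# Record the configuration with the best overall validation score.
best_score, best_params = float("-inf"), None
for lr, alpha in product(grid["lr"], grid["alpha"]):
    score = train_and_validate(lr, alpha)
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "alpha": alpha}

print(f"Best hyperparameters by validation score: {best_params}")
```

The final model would then be retrained (or retained) with `best_params` and evaluated once on the held-out test set, keeping the test split untouched during selection.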