Achieving Long-Term Fairness in Sequential Decision Making

Authors: Yaowei Hu, Lu Zhang

AAAI 2022, pp. 9549-9557 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical evaluation shows the effectiveness of the proposed algorithm on synthetic and semi-synthetic temporal datasets.
Researcher Affiliation | Academia | Yaowei Hu, Lu Zhang; University of Arkansas; {yaoweihu, lz006}@uark.edu
Pseudocode | Yes | Algorithm 1: Repeated Risk Minimization (a minimal sketch of the pattern follows this table)
Open Source Code | Yes | The code and hyperparameter settings are available online: https://github.com/yaoweihu/Achieving-Long-term-Fairness.
Open Datasets | Yes | Semi-synthetic Data. We use the Taiwan credit card dataset (Yeh and Lien 2009) as the initial data at t = 1.
Dataset Splits | No | The paper describes a 'training process' but does not explicitly provide details about train/validation/test dataset splits, specific percentages, or counts for reproducibility.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software like PyTorch and CVXPY but does not provide specific version numbers for these dependencies, which reproducibility requires.
Experiment Setup | Yes | The code and hyperparameter settings are available online: https://github.com/yaoweihu/Achieving-Long-term-Fairness. For our algorithm, we use the logistic loss function for the surrogate function ϕ and the linear model for the decision model. All algorithms use l2-regularization, which equips the logistic loss function with strong convexity. In our algorithm, the ReLU activation function is adopted to ensure that the fairness constraints are always non-negative, and we adopt PyTorch (Paszke et al. 2019) to implement optimization with the Adam optimizer. (A training-loop sketch follows the table.)