Equal Improvability: A New Fairness Notion Considering the Long-term Impact
Authors: Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook Lee
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on both synthetic and real datasets, we demonstrate that the proposed EI-regularized algorithms encourage us to find a fair classifier in terms of EI. Additionally, we ran experiments on dynamic scenarios which highlight the advantages of our EI metric in equalizing the distribution of features across different groups, after the rejected samples make some effort to improve. |
| Researcher Affiliation | Academia | University of California, Santa Barbara; Yonsei University; University of Wisconsin-Madison |
| Pseudocode | Yes | Algorithm 1 Pseudocode for achieving EI (a hedged sketch of the EI-regularized training loop follows this table) |
| Open Source Code | Yes | Codes are available in a GitHub repository: https://github.com/guldoganozgur/ei_fairness |
| Open Datasets | Yes | Datasets. We perform the experiments on one synthetic dataset, and two real datasets: German Statlog Credit (Dua & Graff, 2017), and ACSIncome-CA (Ding et al., 2021). (A loading sketch follows the table.) |
| Dataset Splits | No | The paper states "The ratio of the training versus test data is 4:1" and "We perform cross-validation on the training set to find the best hyperparameter." Cross-validation is used for hyperparameter tuning, but an explicit train/validation/test split (e.g., 80/10/10) is not reported. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., specific GPU or CPU models). |
| Software Dependencies | No | The paper mentions software components such as the "Adam optimizer" and "Gaussian kernel" but does not provide version numbers for any of its software stack, which reproducibility requires. |
| Experiment Setup | Yes | For all experiments, we use the Adam optimizer and cross-entropy loss. We perform cross-validation on the training set to find the best hyperparameter. We provide statistics for five trials having different random seeds. For the KDE-based approach, we use the Gaussian kernel. ... The maximum effort δ for this dataset is set to 0.5. ... We set the maximum effort δ = 1... We set the maximum effort δ = 3. (A setup sketch follows the table.) |
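
For the "Pseudocode" row: Algorithm 1 in the paper trains a classifier with an EI-regularization term added to the prediction loss. The following is a minimal sketch of that idea under simplifying assumptions, not the authors' implementation: it assumes a linear PyTorch classifier, a binary group attribute `z`, an L2-bounded effort budget δ, and a sigmoid relaxation of the acceptance indicator. The names `ei_penalty`, `train_step`, and `lam` are illustrative, and the paper's KDE-based variant smooths the probabilities differently (see the last sketch below).

```python
# Minimal sketch of EI-regularized training, assuming a linear (logistic)
# classifier and an L2-bounded effort budget delta. Illustrative only.
import torch

model = torch.nn.Linear(10, 1)                     # placeholder: 10 features
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

def ei_penalty(x, z, delta):
    """Gap in (relaxed) improvability between groups z == 0 and z == 1."""
    logits = model(x).squeeze(-1)
    rejected = torch.sigmoid(logits) < 0.5         # currently rejected samples
    # For a linear score, the best effort inside an L2 ball of radius delta
    # moves along the weight vector, raising the logit by delta * ||w||.
    improved = torch.sigmoid(logits + delta * model.weight.norm())
    mask0, mask1 = rejected & (z == 0), rejected & (z == 1)
    if not (mask0.any() and mask1.any()):
        return logits.sum() * 0.0                  # skip if a group is empty
    return (improved[mask0].mean() - improved[mask1].mean()).abs()

def train_step(x, y, z, lam=1.0, delta=0.5):
    optimizer.zero_grad()
    loss = bce(model(x).squeeze(-1), y) + lam * ei_penalty(x, z, delta)
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()             # binary labels as floats
z = torch.randint(0, 2, (64,))                     # binary group attribute
train_step(x, y, z)
```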
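For the "Open Datasets" row: both real datasets are publicly fetchable. A loading sketch, assuming the `folktables` package's documented `ACSDataSource`/`ACSIncome` interface for ACSIncome-CA and OpenML's "credit-g" entry for German Statlog Credit; the survey year and settings below are illustrative, not confirmed choices from the paper.

```python
# German Statlog Credit (Dua & Graff, 2017) via OpenML, and ACSIncome for
# California (Ding et al., 2021) via folktables. Requires network access
# and `pip install folktables scikit-learn`.
from sklearn.datasets import fetch_openml
from folktables import ACSDataSource, ACSIncome

german = fetch_openml("credit-g", version=1, as_frame=True)

source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
ca_data = source.get_data(states=["CA"], download=True)
features, labels, group = ACSIncome.df_to_numpy(ca_data)
```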
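For the "Dataset Splits" row: the reported protocol is a 4:1 train/test split with cross-validation on the training set for hyperparameter selection. A minimal sketch with a synthetic stand-in dataset; the estimator and hyperparameter grid are placeholders, not the paper's choices.

```python
# Sketch of the reported protocol: 4:1 train/test split, then
# cross-validation on the training portion to pick hyperparameters.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_tr, y_tr)                 # tuning happens only on training data
print("best C:", search.best_params_["C"])
print("held-out test accuracy:", search.score(X_te, y_te))
```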
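For the "Experiment Setup" row: the paper reports Adam, cross-entropy loss, five random seeds, and a Gaussian kernel for its KDE-based approach. The sketch below shows the Gaussian-kernel smoothing that makes an acceptance probability differentiable, plus a five-seed trial loop; the bandwidth `h`, threshold `tau`, and the `run_trial` stand-in are illustrative, not the authors' values.

```python
# The KDE idea behind the Gaussian-kernel approach: replace the hard
# indicator in P(score > tau) with an average of Gaussian CDFs, which is
# differentiable in the model parameters.
import math
import statistics
import torch

def smooth_acceptance_prob(scores, tau=0.5, h=0.1):
    # Phi((s - tau) / h) averaged over samples; as h -> 0 this approaches
    # the exact (non-differentiable) acceptance rate.
    z = (scores - tau) / h
    return (0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))).mean()

def run_trial(seed):
    """Placeholder for one full training run; returns a test metric."""
    torch.manual_seed(seed)
    scores = torch.randn(100)                      # stand-in for model scores
    return smooth_acceptance_prob(scores).item()

# "Statistics for five trials having different random seeds."
results = [run_trial(seed) for seed in range(5)]
print(f"mean {statistics.mean(results):.3f} ± {statistics.stdev(results):.3f}")
```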