On the Maximal Local Disparity of Fairness-Aware Classifiers
Authors: Jinqiu Jin, Haoxuan Li, Fuli Feng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both tabular and image datasets validate that our fair training algorithm can achieve superior fairness-accuracy trade-offs. |
| Researcher Affiliation | Academia | ¹University of Science and Technology of China, ²Peking University. |
| Pseudocode | Yes | Algorithm 1 Exact Calculation of MCDP(ϵ); Algorithm 2 Approximate Calculation of MCDP(ϵ); Algorithm 3 DiffMCDP: Bi-Level Optimization Algorithm (an illustrative sketch of an approximate MCDP(ϵ) calculation follows the table) |
| Open Source Code | Yes | The code implementation of this paper is available at https://github.com/mitao-cat/icml24_mcdp. |
| Open Datasets | Yes | Adult (Kohavi, 1996) is a popular UCI dataset; the Bank (Moro et al., 2014) dataset is collected from a Portuguese banking institution's marketing campaigns; the CelebA (Liu et al., 2015) dataset contains over 200K face images of celebrities |
| Dataset Splits | Yes | we select the fairest model which achieves at least 95% of the vanilla model's accuracy (i.e., AP of ERM) on the validation set |
| Hardware Specification | Yes | We conduct experiments with a 96-core Intel CPU (Intel(R) Xeon(R) Platinum 8268 @ 2.90GHz * 2) and an NVIDIA 2080Ti GPU (11 GB memory). |
| Software Dependencies | No | The paper mentions "Pytorch (Paszke et al., 2019)" but does not specify a version number for the software used in their experiments. |
| Experiment Setup | Yes | The batch sizes for the tabular and image datasets are set to 1024 and 128, respectively, and the total number of training steps is set to 150. We use the Adam optimizer with an initial learning rate of 0.001, which is decayed piecewise (i.e., the StepLR scheduler in PyTorch (Paszke et al., 2019)) during training. The detailed hyper-parameters are summarized in Table 3 (a minimal PyTorch sketch of this recipe follows the table). |
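
For orientation, here is a minimal NumPy sketch of an approximate MCDP(ϵ) computation, corresponding in spirit to the paper's Algorithm 2. It rests on a reading of the metric rather than the authors' definition: it assumes MCDP(ϵ) is the largest average demographic-parity gap over any threshold window of radius ϵ, and the function names (`delta_dp`, `mcdp`), the uniform threshold grid, and the moving-average windowing are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def delta_dp(scores, groups, thresholds):
    """Per-threshold demographic-parity gap |P(s > t | a=0) - P(s > t | a=1)|."""
    s0, s1 = scores[groups == 0], scores[groups == 1]
    rate0 = (s0[None, :] > thresholds[:, None]).mean(axis=1)
    rate1 = (s1[None, :] > thresholds[:, None]).mean(axis=1)
    return np.abs(rate0 - rate1)

def mcdp(scores, groups, eps, n_grid=1000):
    """Approximate MCDP(eps): the largest average DP gap over any threshold
    window [t - eps, t + eps], estimated on a uniform grid over [0, 1].
    NOTE: this windowed-max definition is an assumption for illustration."""
    thresholds = np.linspace(0.0, 1.0, n_grid)
    gap = delta_dp(scores, groups, thresholds)
    half = max(1, int(round(eps * n_grid)))        # grid points per half-window
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    # Moving average of the gap; zero-padding slightly shrinks edge windows.
    local_avg = np.convolve(gap, kernel, mode="same")
    return float(local_avg.max())

# Toy usage with synthetic scores and a random binary sensitive attribute.
rng = np.random.default_rng(0)
scores = rng.uniform(size=2000)
groups = rng.integers(0, 2, size=2000)
print(mcdp(scores, groups, eps=0.05))
```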
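
The training recipe in the last row maps directly onto standard PyTorch primitives. The sketch below is a minimal reconstruction, assuming a toy binary classifier and placeholder StepLR `step_size`/`gamma` values; the actual hyper-parameters are in the paper's Table 3 and the released code.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import StepLR

# Reported setup: Adam with initial lr 0.001, piecewise (StepLR) decay,
# 150 training steps, batch size 1024 (tabular) or 128 (image).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))  # toy stand-in
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = StepLR(optimizer, step_size=50, gamma=0.1)  # step_size/gamma assumed
criterion = nn.BCEWithLogitsLoss()

batch_size = 1024  # 1024 for tabular data, 128 for images
for step in range(150):
    x = torch.randn(batch_size, 16)                     # stand-in batch
    y = torch.randint(0, 2, (batch_size, 1)).float()    # stand-in labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
```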