On the Problem of Underranking in Group-Fair Ranking
Authors: Sruthi Gorantla, Amit Deshpande, Anand Louis
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 3. Experimental Validation In this section, we give empirical observations about three broad questions (i) Is there a trade-off between underranking and group fairness in the real-world datasets? (ii) How effective is underranking in choosing a group-fair ranking? (iii) Does ALG achieve best trade-off between group fairness and underranking? |
| Researcher Affiliation | Collaboration | 1Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India. 2Microsoft Research, Bangalore, India. |
| Pseudocode | Yes | Algorithm 1 ALG Input: A true ranking of the N items and parameters α_l, β_l for each group l, and k satisfying the conditions in Theorem 2.5. (See the re-ranking sketch after the table.) |
| Open Source Code | No | The implementation of the algorithm proposed in this paper is also made public⁴. (Footnote 4 reads 'Implementation of ALG' but provides no link.) |
| Open Datasets | Yes | We experiment on two real-world datasets. 1. German Credit Risk dataset consists of credit risk scoring of 1000 adult German residents (Dua & Graff, 2017) along with their demographic information such as personal status, gender, age, etc. as well as financial status such as credit history, property, housing, job etc. ... 2. COMPAS¹ recidivism dataset consists of violent recidivism assessment of nearly 7000 criminal defendants based on a questionnaire. Angwin et al. (2016) have analysed this tool... We use the processed subsets of German credit risk and COMPAS recidivism datasets³. (Footnote 3: https://github.com/DataResponsibly/FairRank/tree/master/datasets) (See the loading sketch after the table.) |
| Dataset Splits | No | The paper describes a re-ranking algorithm that takes a given 'true ranking' as input and modifies it. It does not involve traditional machine learning training, and therefore does not specify train/validation/test splits for its experiments. Evaluation is done on the top k ranks of the re-ranked output. |
| Hardware Specification | Yes | The experiments were run on a Dual Intel Xeon 4110 processor consisting of 16 cores (32 threads), with a clock speed of 2.1 GHz and DRAM of 128GB. |
| Software Dependencies | No | The paper describes the algorithms and their implementation, but it does not specify any software dependencies or their version numbers (e.g., programming languages, libraries, or frameworks with specific versions). |
| Experiment Setup | Yes | In ALG and both the baselines, we choose k = 100. ... ALG is also run with group fairness constraints (α = (1, 1), β = (p1 + δ, 0), k = 100), and the parameter ε = 0.4. |
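
The Open Datasets row points to processed subsets of German Credit and COMPAS hosted in the DataResponsibly/FairRank repository (footnote 3). A minimal loading sketch using pandas follows; the CSV file name and the score/group column names are placeholders, since the actual file names and schemas in that repository may differ.

```python
import pandas as pd

# Raw-file base URL for the repository cited in footnote 3.
BASE = "https://raw.githubusercontent.com/DataResponsibly/FairRank/master/datasets"

def load_true_ranking(csv_name: str, score_col: str, group_col: str):
    """Return item indices sorted by decreasing score, plus an index -> group map."""
    df = pd.read_csv(f"{BASE}/{csv_name}")
    ranked = df.sort_values(score_col, ascending=False)
    true_ranking = list(ranked.index)                 # merit order by score
    group_of = dict(zip(df.index, df[group_col]))     # item index -> group label
    return true_ranking, group_of

# Placeholder names only -- substitute the repository's actual file/column names:
# ranking, groups = load_true_ranking("german_credit.csv", "score", "sex")
```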
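
The Pseudocode and Experiment Setup rows describe ALG's inputs: a true ranking, per-group fairness parameters α_l and β_l (read here as upper- and lower-bound proportions), and a prefix length k. The sketch below is only a rough illustration of that interface, not the paper's ALG or its Theorem 2.5 guarantees: it greedily re-ranks items so that every top-k prefix meets given lower-bound proportions, and it reports one natural underranking measure, the largest drop of any item relative to its true rank. All function and variable names are ours.

```python
import math
from typing import Dict, List, Sequence

def greedy_fair_rerank(true_ranking: Sequence[int],
                       group_of: Dict[int, int],
                       lower: Sequence[float],
                       k: int) -> List[int]:
    """Toy prefix-fair re-ranking (an illustration, NOT the paper's ALG).

    true_ranking : item ids sorted by decreasing merit (the "true ranking").
    group_of     : item id -> group index in range(len(lower)).
    lower        : per-group lower-bound proportions enforced on every top-k prefix.
    k            : length of the prefix subject to the constraints.
    """
    remaining = list(true_ranking)        # still in merit order
    counts = [0] * len(lower)             # items placed so far, per group
    output: List[int] = []

    for pos in range(1, min(k, len(true_ranking)) + 1):
        # Groups that would fall below floor(lower * pos) items in this prefix.
        deficient = {g for g in range(len(lower))
                     if counts[g] < math.floor(lower[g] * pos)}
        pick = None
        if deficient:
            # Best remaining item from a deficient group, if one exists.
            pick = next((it for it in remaining if group_of[it] in deficient), None)
        if pick is None:
            pick = remaining[0]           # otherwise keep the merit order
        remaining.remove(pick)
        counts[group_of[pick]] += 1
        output.append(pick)

    return output + remaining             # ranks beyond k keep the merit order


def max_underranking(true_ranking: Sequence[int],
                     new_ranking: Sequence[int]) -> int:
    """Largest rank drop of any item (new rank minus true rank, 1-indexed)."""
    true_pos = {it: r for r, it in enumerate(true_ranking, start=1)}
    return max(r - true_pos[it] for r, it in enumerate(new_ranking, start=1))
```

The (true_ranking, group_of) pair from the loading sketch can be fed directly to greedy_fair_rerank. With two groups, the constraints quoted in the Experiment Setup row would correspond to something like lower = (p1 + delta, 0.0) and k = 100, where p1 is the protected group's proportion; the upper bounds α = (1, 1) are vacuous and are therefore omitted from this toy sketch.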