Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
FROC: Building Fair ROC from a Trained Classifier
Authors: Avyukta Manjunatha Vummintala, Shantanu Das, Sujit Gujar
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we demonstrate the efficacy of FROC via experiments. We also study the performance of our FROC on multiple real-world datasets with many trained classifiers." (Section 5: Empirical Analysis) |
| Researcher Affiliation | Academia | International Institute of Information Technology, Hyderabad EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: FAIRROC ALGORITHM |
| Open Source Code | Yes | Code and Miscellaneous https://github.com/magnetar-iiith/FROC/tree/main |
| Open Datasets | Yes | Datasets: We train different classifiers on the widely-used ADULT (Becker and Kohavi 1996) and COMPAS (Angwin et al. 2022) benchmark datasets, selecting MALE and FEMALE as protected groups in ADULT, and BLACK and OTHERS in COMPAS. |
| Dataset Splits | No | The paper mentions training on ADULT and COMPAS datasets but does not provide specific details on training, validation, or test splits. It only states 'We train C1 on both datasets, C2 and C3 on the Adult dataset, and generate their ROCs for all the protected groups.' without further split information. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'sklearn implementations for C2 and C3' but does not provide version numbers for sklearn or any other software libraries. |
| Experiment Setup | No | The paper adopts training parameters from prior work ('For consistent comparison, we adopt the training parameters for base classifiers from (Alghamdi et al. 2022)') but does not state specific hyperparameter values or training configurations in its main text. |
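Since the paper builds a fair ROC from per-group ROC curves of a trained classifier, a minimal sketch of the underlying ingredient may help: computing ROC points separately for each protected group from classifier scores. This is an illustrative starting point only, not the authors' FROC algorithm; the function names, threshold handling, and data layout here are assumptions (see the linked repository for the actual implementation).

```python
# Illustrative sketch: per-group ROC points from classifier scores.
# NOT the paper's FROC algorithm; function names and interfaces are
# hypothetical. Scores, labels, and group memberships are parallel lists.

def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) pairs, one per threshold, for a single group."""
    pos = sum(labels)                 # number of positive examples
    neg = len(labels) - pos           # number of negative examples
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg if neg else 0.0,
                       tp / pos if pos else 0.0))
    return points

def group_rocs(scores, labels, groups, thresholds):
    """ROC points computed separately for each protected group,
    e.g. MALE/FEMALE in ADULT or BLACK/OTHERS in COMPAS."""
    rocs = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rocs[g] = roc_points([scores[i] for i in idx],
                             [labels[i] for i in idx],
                             thresholds)
    return rocs
```

Comparing the resulting per-group curves is what motivates a fairness post-processing step such as FROC: when the curves diverge, a single decision threshold yields different error trade-offs across groups.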