f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization

Authors: Sina Baharlouei, Shivam Patel, Meisam Razaviyayn

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments provide an extensive examination of various f-divergences and their suitability as regularizers and also show the consistency of our method across all batch sizes in contrast to existing benchmarks. Similar experiments are carried out for robust training on varying amounts of distributional shifts in data. |
| Researcher Affiliation | Academia | University of Southern California ({baharlou, razaviya}@usc.edu); Department of Electrical Engineering, IIT Bombay (shivamapatel2002@gmail.com) |
| Pseudocode | Yes | Algorithm 1: Stochastic Gradient Descent-Ascent (SGDA) for f-FERM (a hedged code sketch follows the table) |
| Open Source Code | Yes | An efficient stochastic implementation of f-FERM is publicly available at https://github.com/optimization-for-data-driven-science/f-FERM |
| Open Datasets | Yes | Adult dataset (Becker & Kohavi, 1996) |
| Dataset Splits | No | The paper mentions training and test data, but the main text does not give the split information needed to reproduce the partitioning: exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology. |
| Hardware Specification | No | The paper does not specify the hardware used to run its experiments (exact GPU/CPU models, processor types and speeds, or memory amounts). |
| Software Dependencies | No | The paper does not provide the specific ancillary software details (e.g., library or solver names with version numbers, like Python 3.8 or CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | To run Algorithm 1, we set ηθ and ηα to 10⁻⁵ and 10⁻⁶ respectively in all experiments. Further, by changing λ, we get different points in the trade-off curve between accuracy and fairness. |
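
The quoted setup fixes the SGDA step sizes (ηθ = 10⁻⁵, ηα = 10⁻⁶) and uses λ to trace the accuracy-fairness trade-off, but Algorithm 1 itself appears only as pseudocode. Below is a minimal PyTorch sketch of one possible reading of that loop, not the authors' implementation: it assumes KL divergence as the f-divergence (convex conjugate f*(t) = e^(t−1)), a linear model, binary labels, and a binary sensitive attribute. The synthetic tensors X, y, s, the batch size, and the iteration count are illustrative placeholders, not the paper's configuration.

```python
# A minimal SGDA sketch in the spirit of f-FERM's Algorithm 1 (an assumption,
# not the authors' code). The fairness regularizer is the variational
# (conjugate) form of the KL divergence between the joint distribution of
# (yhat, s) and the product of its marginals, with f*(t) = exp(t - 1).
import torch

torch.manual_seed(0)
n, d = 512, 10
X = torch.randn(n, d)                       # placeholder features
y = (torch.rand(n) < 0.5).float()           # placeholder binary labels
s = (torch.rand(n) < 0.5).long()            # placeholder binary sensitive attr

theta = torch.zeros(d, requires_grad=True)      # model parameters (min player)
alpha = torch.zeros(2, 2, requires_grad=True)   # dual variables (max player)

eta_theta, eta_alpha = 1e-5, 1e-6   # step sizes quoted in the setup row
lam = 1.0                           # fairness weight; sweep for the trade-off
bce = torch.nn.functional.binary_cross_entropy_with_logits

for step in range(1000):
    idx = torch.randint(0, n, (64,))         # minibatch indices
    xb, yb, sb = X[idx], y[idx], s[idx]
    p = torch.sigmoid(xb @ theta)            # P(yhat = 1 | x), differentiable

    # Soft empirical distributions over (yhat, s) from the minibatch.
    onehot_s = torch.nn.functional.one_hot(sb, 2).float()             # (B, 2)
    probs_y = torch.stack([1 - p, p], dim=1)                          # (B, 2)
    p_joint = (probs_y.unsqueeze(2) * onehot_s.unsqueeze(1)).mean(0)  # (2, 2)
    p_y = p_joint.sum(dim=1, keepdim=True)   # marginal of yhat
    p_s = p_joint.sum(dim=0, keepdim=True)   # marginal of s

    # Variational KL estimate: E_P[alpha] - E_Q[f*(alpha)], Q = product measure.
    reg = (p_joint * alpha).sum() - (p_y * p_s * torch.exp(alpha - 1)).sum()

    loss = bce(xb @ theta, yb) + lam * reg
    g_theta, g_alpha = torch.autograd.grad(loss, [theta, alpha])
    with torch.no_grad():
        theta -= eta_theta * g_theta    # descent step on the model
        alpha += eta_alpha * g_alpha    # ascent step on the dual variables
```

Sweeping lam over a grid and recording accuracy alongside a fairness metric at each value yields the kind of trade-off curve the setup row describes; a different f-divergence would swap a different conjugate f* into the `reg` term.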