Metric-Free Individual Fairness in Online Learning
Authors: Yahav Bechavod, Christopher Jung, Zhiwei Steven Wu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and a sub-linear number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Rothblum and Yona, 2018), despite only having access to a very restricted form of fairness feedback. |
| Researcher Affiliation | Academia | Yahav Bechavod (Hebrew University) yahav.bechavod@cs.huji.ac.il; Christopher Jung (University of Pennsylvania) chrjung@seas.upenn.edu; Zhiwei Steven Wu (Carnegie Mellon University) zstevenwu@cmu.edu |
| Pseudocode | Yes | Algorithm 1: Online Fair Batch Classification (FAIR-BATCH); Algorithm 2: Online Batch Classification (BATCH); Algorithm 3: Reduction from Online Fair Batch Classification to Online Batch Classification. A hedged sketch of this reduction appears after the table. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and does not use or provide access information for any specific public datasets for experimental validation. It refers to abstract data distributions and i.i.d. data for theoretical analysis. |
| Dataset Splits | No | The paper is theoretical and does not describe any specific experimental setup with dataset splits (training, validation, or testing). |
| Hardware Specification | No | The paper focuses on theoretical contributions and does not describe any specific hardware used for experiments. |
| Software Dependencies | No | The paper describes algorithms (e.g., exponential weights, CONTEXT-FTPL) but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, specific libraries). An illustrative exponential-weights sketch appears after the table. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters, training configurations, or system-level settings. |
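
The rows above note that the paper's guarantees come from plugging standard no-regret learners (exponential weights, CONTEXT-FTPL) into a reduction. As a concrete reference point, here is a minimal exponential-weights sketch in Python; the class name and API are our own illustration, not code from the paper.

```python
import numpy as np

class ExponentialWeights:
    """Multiplicative-weights learner over a finite set of experts.

    A standard online learner of the kind the paper's reduction can
    plug in; this is an illustrative sketch, not the authors' code.
    """

    def __init__(self, n_experts: int, eta: float):
        self.eta = eta                    # learning rate
        self.log_w = np.zeros(n_experts)  # log-weights, for numerical stability

    def distribution(self) -> np.ndarray:
        p = np.exp(self.log_w - self.log_w.max())
        return p / p.sum()

    def sample(self, rng: np.random.Generator) -> int:
        return int(rng.choice(len(self.log_w), p=self.distribution()))

    def update(self, losses: np.ndarray) -> None:
        # w_i <- w_i * exp(-eta * loss_i), done in log space.
        self.log_w -= self.eta * losses

# Toy usage: 5 experts, uniformly random losses.
rng = np.random.default_rng(0)
learner = ExponentialWeights(n_experts=5, eta=0.1)
for _ in range(100):
    expert = learner.sample(rng)
    learner.update(rng.uniform(size=5))
```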
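Building on that, the following skeleton shows one way Algorithm 3's reduction could route the auditor's one-pair fairness feedback into such a base learner. Everything here, including the `fair_batch_reduction` name, the auditor interface, and the additive `lam` penalty, is an assumption made for illustration; the paper's actual combined loss and its use of CONTEXT-FTPL for oracle efficiency differ in the details.

```python
import numpy as np

def fair_batch_reduction(experts, auditor, stream, eta=0.1, lam=1.0, seed=0):
    """Hedged skeleton of a reduction from online fair batch classification
    to standard online classification (in the spirit of Algorithm 3).

    experts : finite list of classifiers h(x) -> {0, 1}.
    auditor : callable (xs, preds) -> (i, j) or None; a reported pair means
              individual i was treated worse than the similar individual j.
    stream  : iterable of (xs, ys) batches.
    The names and the additive `lam` penalty are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    log_w = np.zeros(len(experts))  # exponential-weights state

    for xs, ys in stream:
        ys = np.asarray(ys)

        # Deploy a classifier sampled from the current distribution.
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        h = experts[rng.choice(len(experts), p=p)]
        preds = np.array([h(x) for x in xs])

        # One round of (possibly empty) fairness feedback.
        pair = auditor(xs, preds)

        # Surrogate loss per expert: batch error plus a penalty if that
        # expert also favors j over the reported individual i.
        losses = np.empty(len(experts))
        for idx, g in enumerate(experts):
            gp = np.array([g(x) for x in xs])
            err = float(np.mean(gp != ys))
            viol = 0.0 if pair is None else max(gp[pair[1]] - gp[pair[0]], 0.0)
            losses[idx] = err + lam * viol

        # Standard online update (exponential weights) on the combined loss.
        log_w -= eta * losses

    return log_w  # final log-weights over the expert class
```

The design point the sketch tries to surface is the one the Research Type row quotes: the fairness feedback only perturbs the loss vector handed to the base learner, so any no-regret algorithm over the expert class can be swapped in unchanged.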