Fairness in Learning: Classic and Contextual Bandits

Authors: Matthew Joseph, Michael Kearns, Jamie H. Morgenstern, Aaron Roth

NeurIPS 2016

Reproducibility assessment (variable, result, and LLM response for each item):
Research Type: Theoretical
In the classic stochastic bandits problem, the authors give a provably fair algorithm based on chained confidence intervals and prove a cumulative regret bound with a cubic dependence on the number of arms. They further show that any fair algorithm must have such a dependence, establishing a strong separation between fair and unfair learning that extends to the general contextual case. In the contextual setting, they prove a tight connection between fairness and the KWIK (Knows What It Knows) learning model: a KWIK algorithm for a class of functions can be transformed into a provably fair contextual bandit algorithm, and vice versa. This connection yields a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and shows, for a different class of functions, a worst-case exponential gap in regret between fair and unfair learning algorithms.
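For orientation, the fairness constraint behind these results can be stated compactly. The following formalization is our hedged reading of the paper's notion, with the symbols \( \mu_j \) and \( \pi_t \) as our own notation: a fair algorithm may never preferentially play a worse arm over a better one. With probability at least \( 1 - \delta \), simultaneously for every round \( t \) and every pair of arms \( j, j' \),

\[
\mu_j > \mu_{j'} \;\Longrightarrow\; \pi_t(j) \ge \pi_t(j'),
\]

where \( \mu_i \) is the mean reward of arm i and \( \pi_t(i) \) is the probability that the algorithm plays arm i in round t, conditioned on the history. Intuitively, such an algorithm must play arms with equal probability until it can statistically distinguish them, which is what drives the cubic dependence on the number of arms.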
Researcher Affiliation: Academia
University of Pennsylvania, Department of Computer and Information Science; {majos, mkearns, jamiemor, aaroth}@cis.upenn.edu
Pseudocode: Yes
The paper includes pseudocode for its main algorithms, for example:

1: procedure FAIRBANDITS(δ)
2:   S0 ← {1, . . . , k}   ▷ Initialize the active set
...

1: procedure KWIKTOFAIR(δ, T)
2:   δ′ ← min(δ, 1/T) / (kT²),  ϵ ← arg min_ϵ max(ϵT, k·m(ϵ, δ′))
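To make the excerpted pseudocode concrete, here is a minimal runnable sketch of the FAIRBANDITS idea as we read it: maintain a confidence interval per arm, chain outward from the arm with the highest upper bound, and play uniformly at random among all chained arms. The pull() interface, the particular interval width, and the omission of the paper's active-set elimination step are our simplifying assumptions, not the paper's exact algorithm.

import math
import random

def fair_bandits(pull, k, T, delta):
    # Sketch of a FairBandits-style loop; rewards are assumed to lie in [0, 1].
    # pull(i) returns a stochastic reward for arm i (hypothetical interface).
    means = [0.0] * k
    counts = [0] * k
    for _ in range(T):
        # Confidence intervals. This width is an illustrative Hoeffding-style
        # choice, not the exact form derived in the paper.
        ivals = []
        for i in range(k):
            if counts[i] == 0:
                ivals.append((0.0, 1.0))
            else:
                w = math.sqrt(math.log(4 * k * T / delta) / (2 * counts[i]))
                ivals.append((means[i] - w, means[i] + w))
        # Chain outward from the arm with the largest upper bound: an arm joins
        # while its interval overlaps the chain built so far.
        order = sorted(range(k), key=lambda i: ivals[i][1], reverse=True)
        chain = [order[0]]
        lo = ivals[order[0]][0]
        for i in order[1:]:
            if ivals[i][1] < lo:
                break  # provably separated from every chained arm
            chain.append(i)
            lo = min(lo, ivals[i][0])
        # Fairness: statistically indistinguishable arms are played with
        # equal probability.
        i = random.choice(chain)
        r = pull(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
    return means, counts

# Example: three Bernoulli arms with means 0.3, 0.5, 0.7.
means, counts = fair_bandits(
    lambda i: float(random.random() < [0.3, 0.5, 0.7][i]), k=3, T=10000, delta=0.1)

Playing uniformly within the chain, rather than always exploiting the empirically best arm, is exactly what distinguishes this selection rule from standard confidence-bound methods.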
Open Source Code: No
The paper contains no explicit statement about releasing open-source code for the described methodology, and no link to a code repository.

Open Datasets: No
The paper is theoretical: it defines abstract settings and functions for its analysis rather than using or referring to publicly available datasets.

Dataset Splits: No
The paper involves no empirical experiments, so no training, validation, or test splits are required.

Hardware Specification: No
The paper reports no empirical experiments, so no hardware specifications are provided.

Software Dependencies: No
The paper describes algorithms and mathematical results; it specifies no software dependencies or version numbers needed for empirical replication.

Experiment Setup: No
The paper focuses on algorithm design and proofs rather than empirical experiments, so it gives no experimental setup details such as hyperparameters or training configurations.