Monotone Individual Fairness

Author: Yahav Bechavod

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | We revisit the problem of online learning with individual fairness, where an online learner strives to maximize predictive accuracy while ensuring that similar individuals are treated similarly. ... Using our generalized framework, we present an oracle-efficient algorithm guaranteeing a bound of O(T^{3/4}) simultaneously for regret and number of fairness violations. ... In both settings, our algorithms improve on the best known bounds for oracle-efficient algorithms.
Researcher Affiliation | Academia | Department of Computer and Information Science, University of Pennsylvania.
Pseudocode | Yes | Algorithm 1: Online Learning with Individual Fairness; Algorithm 2: Reduction to Context-FTPL for Online Learning with Individual Fairness; Algorithm 3: Online Learning with Individual Fairness and Partial Information; Algorithm 4: Reduction to Context-Semi-Bandit-FTPL for Online Learning with Individual Fairness and Partial Information
Open Source Code | No | The paper does not provide any statements or links indicating the release of open-source code for the described methodology.

Open Datasets | No | The paper describes a theoretical framework for online learning but does not specify any particular dataset used for training or provide access information for one.

Dataset Splits | No | The paper focuses on theoretical bounds and algorithm design, and as such, it does not specify training/validation/test dataset splits.

Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments.

Software Dependencies | No | The paper mentions theoretical frameworks like 'Context-FTPL' and 'Context-Semi-Bandit-FTPL' but does not list any specific software dependencies with version numbers.

Experiment Setup | No | The paper is theoretical and focuses on algorithm design and bounds; therefore, it does not describe specific experimental setup details such as hyperparameters or system-level training settings.
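
The paper supplies pseudocode (Algorithms 1-4 above) but no code, datasets, or experiments, so nothing in it is directly runnable. As a rough illustration only, the following is a minimal sketch of the round-based protocol those algorithm titles describe: a learner commits to a predictor each round, a fairness auditor may flag a pair of similar individuals who were treated very differently, and policy selection goes through a Follow-the-Perturbed-Leader step standing in for the Context-FTPL oracle reduction. Every name and constant below (predict, audit, ftpl_pick, the linear threshold class, the slack alpha) is a hypothetical stand-in inferred from the abstract, not the authors' construction.

import numpy as np

rng = np.random.default_rng(0)
T, d, n_policies = 200, 5, 50

# Hypothetical finite policy class: linear threshold classifiers.
policies = rng.normal(size=(n_policies, d))

def predict(w, xs):
    # Hard 0/1 predictions from a linear threshold rule (an assumption).
    return (xs @ w > 0).astype(float)

def audit(preds, xs, alpha=0.2):
    # Hypothetical auditor: flags one pair (i, j) whose prediction gap
    # exceeds their feature-space distance plus slack alpha, mirroring the
    # "similar individuals treated similarly" constraint.
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(preds[i] - preds[j]) > np.linalg.norm(xs[i] - xs[j]) + alpha:
                return (i, j)
    return None

def ftpl_pick(cum_loss, eta):
    # Follow-the-Perturbed-Leader stand-in for the Context-FTPL oracle
    # reduction: argmin of cumulative loss minus fresh exponential noise.
    return int(np.argmin(cum_loss - rng.exponential(scale=eta, size=cum_loss.shape)))

cum_loss = np.zeros(n_policies)
violations = 0

for t in range(T):
    xs = 0.5 * rng.normal(size=(8, d))            # batch of individuals
    ys = (xs[:, 0] + xs[:, 1] > 0).astype(float)  # toy ground truth

    k = ftpl_pick(cum_loss, eta=np.sqrt(t + 1))   # commit before feedback
    preds = predict(policies[k], xs)

    flagged = audit(preds, xs)                    # auditor inspects deployment
    if flagged is not None:
        violations += 1

    # Counterfactual accuracy + fairness losses for every policy; the paper's
    # algorithms obtain this through an oracle reduction, not enumeration.
    for m in range(n_policies):
        p = predict(policies[m], xs)
        loss = np.mean(p != ys)
        if flagged is not None:
            i, j = flagged
            loss += max(0.0, abs(p[i] - p[j]) - np.linalg.norm(xs[i] - xs[j]))
        cum_loss[m] += loss

print(f"rounds={T}  flagged fairness violations={violations}")

The exponential perturbation in ftpl_pick is what makes selection oracle-style (one cost-minimization call per round); this toy loop makes no attempt to reproduce the paper's O(T^{3/4}) simultaneous guarantees on regret and number of fairness violations.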