Trading-off price for data quality to achieve fair online allocation
Authors: Mathieu Molina, Nicolas Gast, Patrick Loiseau, Vianney Perchet
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose an algorithm that jointly solves both problems and show that it has a regret bounded by O(√T). A key difficulty is that the rewards received by selecting a source are correlated through the fairness penalty, which leads to a need for randomization (despite a stochastic setting). Our algorithm takes into account contextual information available before the source selection, and can adapt to many different fairness notions. We also show that in some instances, the estimates used can be learned on the fly. ... We apply Algorithm 1 to this example, and we compare its performance to (i) that of an algorithm that has access only to the first source (which is the best source since both sources are symmetric), (ii) a greedy algorithm that only uses the first source and selects xt = 1 whenever E[ut | ct1] > 0, and (iii) the offline optimal bounds OPT and static-OPT. The results are presented in Figure 1. |
| Researcher Affiliation | Collaboration | Mathieu Molina Inria, Fair Play Team, Palaiseau, France mathieu.molina@inria.fr Nicolas Gast Univ. Grenoble Alpes, Inria, CNRS Grenoble INP, LIG, 38000 Grenoble, France nicolas.gast@inria.fr Patrick Loiseau Inria, Fair Play Team, Palaiseau, France patrick.loiseau@inria.fr Vianney Perchet CREST, ENSAE, Palaiseau, France Criteo AI Lab, Paris, France vianney.perchet@normalesup.org |
| Pseudocode | Yes | Algorithm 1 Online Fair Allocation with Source Selection; Algorithm 2 Online Fair Allocation with Source Selection General algorithm; Algorithm 3 Online Fair Allocation with Source Selection Other fairness penalty; Algorithm 4 Online Fair Allocation with Source Selection With public contexts |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes illustrative examples with statistical properties and parameters, but does not use or provide access information for a publicly available or open dataset for its experiments. |
| Dataset Splits | No | The paper describes illustrative examples with statistical properties and parameters. It does not specify exact train/validation/test splits or cross-validation details for its experiments. |
| Hardware Specification | Yes | The computations were done on a laptop with an i7-10510U and 16 GB of RAM. |
| Software Dependencies | No | The paper does not mention specific software names with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific solvers). |
| Experiment Setup | Yes | We consider a case where the utility ut ∈ {-1, 1} and the protected attribute of a user at ∈ {-1, 1}. All combinations of ut and at are equiprobable: P(ut = u, at = a) = 1/4 for all u, a ∈ {-1, 1}. The penalty function is R(δ) = 5\|δ\|. There are K = 2 sources of information. The first source is capable of identifying good individuals of group 1: it gives a context ct1 = 1 when (at, ut) = (1, 1) and ct1 = 0 otherwise. The second source does the same for good individuals of group -1: it gives ct2 = 1 when (at, ut) = (-1, 1) and ct2 = 0 otherwise. ... We consider a fairness penalty R(x) = rx^2 with r >= 0, and A = {-1, 1} so that at can only be either 1 or -1, encoding men and women for instance. We suppose that the protected attributes and the utility (ut, at) are distributed according to Pr(at = 1) = Pr(at = -1) = 1/2 and ut ~ N(at, 1). ... Here the sources are monotone in that one contains strictly more information than the other, but is more expensive. Basically we are going to observe the utility ut in all cases, but before observing ut we can pay p to additionally observe at. |
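The two-source example in the setup row can be simulated in a few lines. The sketch below is not the paper's Algorithm 1; it only reproduces the environment (equiprobable (ut, at), source 1 flagging (1, 1) individuals) and the greedy baseline from the quoted comparison, which allocates whenever E[ut | ct1] > 0. The demographic-parity gap δ used for the penalty R(δ) = 5|δ| is an assumed instantiation (difference in per-group allocation rates); the paper's exact fairness functional may differ.

```python
import random

def sample_user(rng):
    # All four (u, a) combinations are equiprobable: P(u, a) = 1/4.
    return rng.choice([-1, 1]), rng.choice([-1, 1])

def source1(u, a):
    # Source 1 identifies "good" individuals of group 1:
    # c1 = 1 iff (a, u) = (1, 1), and c1 = 0 otherwise.
    return 1 if (a, u) == (1, 1) else 0

def greedy_source1(T, rng, r=5.0):
    # Greedy baseline: observe only source 1 and allocate (x = 1)
    # whenever E[u | c1] > 0, i.e. exactly when c1 = 1
    # (E[u | c1 = 1] = 1, while E[u | c1 = 0] = -1/3).
    utility = 0.0
    alloc = {1: 0, -1: 0}   # allocations per protected group
    count = {1: 0, -1: 0}   # group sizes
    for _ in range(T):
        u, a = sample_user(rng)
        count[a] += 1
        if source1(u, a) == 1:
            utility += u
            alloc[a] += 1
    # Assumed fairness gap delta: difference in allocation rates between
    # the two groups, penalised by R(delta) = r * |delta|.
    delta = alloc[1] / max(count[1], 1) - alloc[-1] / max(count[-1], 1)
    return utility / T, delta, utility / T - r * abs(delta)

rng = random.Random(0)
avg_u, delta, value = greedy_source1(100_000, rng)
```

Because the greedy rule allocates only to (1, 1) individuals, its per-round utility is about 0.25 but its allocation-rate gap is about 0.5, so the penalised value is strongly negative. This illustrates why a fairness-aware policy must also query the second (symmetric) source.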