Multicalibration: Calibration for the (Computationally-Identifiable) Masses
Authors: Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We develop and study multicalibration as a new measure of fairness in machine learning that aims to mitigate inadvertent or malicious discrimination that is introduced at training time (even from ground truth data). We demonstrate that in many settings this strong notion of protection from discrimination is provably attainable and aligned with the goal of accurate predictions. Along the way, we present algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and illustrate tight connections to the agnostic learning model. |
| Researcher Affiliation | Academia | (1) Computer Science Department, Stanford University, Stanford, CA; (2) Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel. |
| Pseudocode | Yes | Algorithm 1: Learning a (C, α)-multicalibrated predictor (a hedged sketch of this loop follows the table). |
| Open Source Code | No | The paper does not provide any statements or links regarding the availability of open-source code for the methodology described. |
| Open Datasets | No | The paper refers to 'a small sample of ground truth data D' but does not specify a publicly available dataset or provide access information (link, citation, repository). |
| Dataset Splits | No | The paper is theoretical and does not specify train/validation/test dataset splits or mention a splitting methodology. |
| Hardware Specification | No | The paper does not mention any specific hardware used for running experiments. This is typical for a theoretical paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any specific experimental setup details, such as hyperparameters or system-level training settings. |
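The Pseudocode row points at Algorithm 1, which repeatedly searches for a set S ∈ C and a prediction-value bin v whose category S_v = {x ∈ S : f(x) ≈ v} violates calibration by more than α, then patches the predictions on that category. Below is a minimal Python sketch of that iterative-patching loop under simplifying assumptions: the paper releases no code (see the Open Source Code row), so the function name `multicalibrate`, the boolean-mask encoding of the class C, the fixed width-λ binning of [0, 1], and the mass threshold `gamma` are all illustrative choices here, and the subgroup/bin search is done by brute-force enumeration rather than the learning-oracle reductions the paper analyzes.

```python
import numpy as np

def multicalibrate(preds, labels, subgroups, alpha=0.1, lam=0.1,
                   gamma=0.01, max_sweeps=1000):
    """Sketch of the iterative patching behind Algorithm 1 (illustrative, not
    the authors' implementation).

    preds     : float array in [0, 1], initial predictions (one per example).
    labels    : {0, 1} array of observed outcomes, standing in for ground truth.
    subgroups : list of boolean masks over the examples, one per set S in C.
    alpha     : calibration tolerance for each category S_v.
    lam       : width of the discretization of [0, 1] into value bins.
    gamma     : skip categories holding less than this fraction of S's mass.
    """
    f = preds.astype(float).copy()
    nbins = int(np.ceil(1.0 / lam))
    for _ in range(max_sweeps):
        updated = False
        for S in subgroups:
            # Bin each example by its *current* prediction value.
            bins = np.minimum((f / lam).astype(int), nbins - 1)
            for b in range(nbins):
                cat = S & (bins == b)  # the category S_v
                if cat.sum() < max(1, gamma * S.sum()):
                    continue           # too little mass to audit
                residual = labels[cat].mean() - f[cat].mean()
                if abs(residual) > alpha:
                    # Patch: shift S_v toward its empirical mean outcome,
                    # keeping predictions inside [0, 1].
                    f[cat] = np.clip(f[cat] + residual, 0.0, 1.0)
                    updated = True
        if not updated:                # every audited category passes
            break
    return f
```

Roughly, each accepted patch moves f closer in squared error to the observed outcomes on the sample, which is the shape of the potential argument the paper uses to bound the number of updates; the `max_sweeps` cap is only a safeguard in this sketch.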