Probably Approximately Metric-Fair Learning
Authors: Gal Yona, Guy Rothblum
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We develop a relaxed approximate metric-fairness framework for machine learning, where fairness does generalize from the sample to the underlying population, and present polynomial-time fair learning algorithms in this framework. (An illustrative sketch of this relaxed fairness constraint appears after the table.) |
| Researcher Affiliation | Academia | Weizmann Institute of Science, Rehovot, Israel. |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of open-source code. |
| Open Datasets | No | The paper is theoretical and discusses 'a training set of labeled examples, drawn i.i.d. from a distribution D' in a conceptual manner, but it does not specify any particular public dataset or provide access information for any data used for training. |
| Dataset Splits | No | The paper does not explicitly mention or describe a validation dataset split. It focuses on theoretical generalization from a training set to an underlying distribution. |
| Hardware Specification | No | The paper does not specify any hardware used for experiments, as it is a theoretical work. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs, and thus does not provide details on a specific experimental setup, hyperparameters, or training configurations. |
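
The Research Type row above refers to the paper's relaxed (approximate) metric-fairness requirement: a predictor `h` should satisfy `|h(x) - h(x')| <= d(x, x') + gamma` for all but a small fraction of pairs drawn from the underlying distribution. The following is a minimal, hedged sketch of how one might estimate the empirical violation rate of that constraint on sampled pairs; the function names, toy predictor, and toy metric are purely illustrative and are not code from the paper (which provides none).

```python
import numpy as np

def violation_rate(h, d, X, gamma=0.05, n_pairs=10_000, rng=None):
    """Estimate the fraction of sampled pairs (x, x') that violate the
    relaxed metric-fairness constraint |h(x) - h(x')| <= d(x, x') + gamma.

    h : callable mapping one example to a score in [0, 1]
    d : callable similarity metric on pairs of examples
    X : array of examples (one per row), treated as an i.i.d. sample from D
    """
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(X), size=(n_pairs, 2))  # random pairs of indices
    violations = 0
    for i, j in idx:
        if abs(h(X[i]) - h(X[j])) > d(X[i], X[j]) + gamma:
            violations += 1
    return violations / n_pairs

# Toy usage (illustrative stand-ins only):
if __name__ == "__main__":
    X = np.random.default_rng(0).uniform(size=(500, 3))
    h = lambda x: float(np.clip(x.sum() / 3.0, 0.0, 1.0))  # toy predictor
    d = lambda a, b: float(np.linalg.norm(a - b))           # toy metric
    print(f"empirical violation rate: {violation_rate(h, d, X):.3f}")
```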