Incentive-Aware PAC Learning

Authors: Hanrui Zhang, Vincent Conitzer

AAAI 2021, pp. 5797-5804

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study PAC learning in the presence of strategic manipulation, where data points may modify their features in certain predefined ways in order to receive a better outcome. We show that the vanilla ERM principle fails to achieve any nontrivial guarantee in this context. Instead, we propose an incentive-aware version of the ERM principle which has asymptotically optimal sample complexity. We then focus our attention on incentive-compatible classifiers, which provably prevent any kind of strategic manipulation. We give a sample complexity bound that is, curiously, independent of the hypothesis class, for the ERM principle restricted to incentive-compatible classifiers. This suggests that incentive compatibility alone can act as an effective means of regularization. We further show that it is without loss of generality to consider only incentive-compatible classifiers when opportunities for strategic manipulation satisfy a transitivity condition. As a consequence, in such cases, our hypothesis-class-independent sample complexity bound applies even without incentive compatibility. Our results set the foundations of incentive-aware PAC learning.
Researcher Affiliation | Academia | Hanrui Zhang, Vincent Conitzer, Duke University, hrzhang@cs.duke.edu, conitzer@cs.duke.edu
Pseudocode | No | The paper describes algorithms conceptually but does not include any structured pseudocode or algorithm blocks in the provided text.
Open Source Code | No | The paper does not provide any statements about releasing source code for the described methodology, nor does it include any links to code repositories or supplementary materials.
Open Datasets | No | The paper is theoretical and discusses samples within a mathematical context, but it does not specify or provide access information for any publicly available or open datasets used in experiments.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with specific data splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers required for replication.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or training configurations.
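The abstract's contrast between vanilla ERM and the incentive-aware ERM principle can be illustrated with a small sketch. This is not the paper's formal construction: the 1-D threshold class, the additive manipulation budget, and all names below are illustrative assumptions.

```python
import random

# Toy model (an illustrative assumption, not the paper's formal setup):
# 1-D examples with true label 1 iff x >= 0.6, learned by threshold
# classifiers "accept iff x >= t". At deployment, each point may inflate
# its reported feature by up to BUDGET if that gets it accepted.
BUDGET = 0.2

def best_response_prediction(t, x):
    """Decision the classifier actually makes once the point reports
    x + BUDGET whenever that manipulation flips it to 'accept'."""
    return 1 if x + BUDGET >= t else 0

def vanilla_erm(sample, thresholds):
    """Minimize empirical error on raw features, ignoring manipulation."""
    return min(thresholds,
               key=lambda t: sum((1 if x >= t else 0) != y
                                 for x, y in sample))

def incentive_aware_erm(sample, thresholds):
    """Minimize empirical error evaluated under best responses."""
    return min(thresholds,
               key=lambda t: sum(best_response_prediction(t, x) != y
                                 for x, y in sample))

random.seed(0)
sample = [(x, 1 if x >= 0.6 else 0)
          for x in (random.random() for _ in range(200))]
thresholds = [i / 100 for i in range(101)]

t_vanilla = vanilla_erm(sample, thresholds)
t_aware = incentive_aware_erm(sample, thresholds)
print(t_vanilla, t_aware)
```

Vanilla ERM places its threshold near the true boundary (around 0.6), but at deployment the negative points just below it manipulate their way across, so its error under best responses is large. The incentive-aware rule anticipates this and shifts the threshold up by roughly the budget, mirroring the abstract's claim that vanilla ERM achieves no nontrivial guarantee under strategic manipulation while the incentive-aware version does.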