Maximizing Welfare with Incentive-Aware Evaluation Mechanisms

Authors: Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Jack Z. Wang

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Motivated by applications such as college admission and insurance rate determination, we propose an evaluation problem where the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features, and aims to classify individuals with respect to a quality score. The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account. We further study the algorithmic aspect of finding the welfare-maximizing evaluation mechanism under two specific settings in our model. When scores are linear and mechanisms use linear scoring rules on the observable features, we show that the optimal evaluation mechanism is an appropriate projection of the quality score. When mechanisms must use linear thresholds, we design a polynomial-time algorithm with a (1/4)-approximation guarantee when the underlying feature distribution is sufficiently smooth and admits an oracle for finding dense regions. We extend our results to settings where the prior distribution is unknown and must be learned from samples.
Researcher Affiliation | Collaboration | Nika Haghtalab¹, Nicole Immorlica², Brendan Lucier², and Jack Z. Wang¹. ¹Cornell University; ²Microsoft Research. {nika,jackzwang}@cs.cornell.edu, {nicimm,brlucier}@microsoft.com
Pseudocode | Yes | Algorithm 1: (1/4 − ε)-Approximation for G0-1, the class of linear threshold mechanisms (a hedged sketch of the objective appears below the table).
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor a link to a code repository for the methodology described.
Open Datasets | No | The paper is theoretical and operates on abstract distributions D rather than specific datasets. It does not provide concrete access information for a publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used to run experiments.
Software Dependencies | No | The paper is theoretical and does not list software dependencies or version numbers needed for replication.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training configurations.
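
For the linear-setting result quoted in the abstract above, the welfare-maximizing linear mechanism is "an appropriate projection of the quality score." The sketch below illustrates the geometric reading of that statement with a plain orthogonal projection onto the observable feature directions; the function name, the matrix B, and the toy numbers are assumptions made for this illustration, not the paper's notation.

```python
import numpy as np

def optimal_linear_mechanism(w, B):
    """Project the true quality weights w onto the column span of B,
    whose columns span the observable feature directions."""
    # Orthogonal projection matrix onto span(B): B (B^T B)^{-1} B^T
    P = B @ np.linalg.inv(B.T @ B) @ B.T
    return P @ w

# Toy example: three latent features, only the first two observable.
w = np.array([2.0, 1.0, 0.5])             # true quality weights
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])                # observable directions
print(optimal_linear_mechanism(w, B))     # -> [2. 1. 0.]
```

In the paper the appropriate projection also depends on the cost structure of feature manipulation, so this plain orthogonal projection should be read only as the underlying geometric intuition.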
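
The Pseudocode row refers to the paper's (1/4 − ε)-approximation for linear threshold mechanisms, which assumes a sufficiently smooth feature distribution and an oracle for finding dense regions. That algorithm is not reproduced here; the sketch below is a naive Monte Carlo stand-in that only illustrates the objective being optimized: expected quality (welfare) after individuals strategically move to cross the threshold. The quadratic movement cost weighed against a unit acceptance reward is an assumption of this sketch, not the paper's cost model.

```python
import numpy as np

rng = np.random.default_rng(0)

def welfare(theta, b, X, w):
    """Average true quality <w, x> after agents best-respond to a
    linear threshold mechanism that accepts iff <theta, x> >= b."""
    norm = np.linalg.norm(theta)
    margin = (X @ theta - b) / norm          # signed distance to boundary
    shortfall = np.clip(-margin, 0.0, None)  # distance needed to cross
    # Assumed cost model: an agent moves iff its quadratic movement
    # cost (shortfall squared) is at most the unit acceptance reward.
    moves = np.where(shortfall**2 <= 1.0, shortfall, 0.0)
    X_after = X + np.outer(moves, theta / norm)
    return float((X_after @ w).mean())

# Naive random search over thresholds -- NOT the paper's algorithm,
# which uses the dense-region oracle to achieve the (1/4 - eps) guarantee.
X = rng.normal(size=(1000, 2))   # hypothetical observable features
w = np.array([1.0, 0.5])         # hypothetical quality weights
best_val, best_rule = -np.inf, None
for _ in range(200):
    theta, b = rng.normal(size=2), rng.normal()
    val = welfare(theta, b, X, w)
    if val > best_val:
        best_val, best_rule = val, (theta, b)
```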