Incentivizing Truthfulness Through Audits in Strategic Classification

Authors: Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik

AAAI 2021, pp. 5347-5354 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that in the threshold allocation setting an optimal policy audits uniformly at random all agents who are above the threshold, with special consideration for those who are either obviously lying or telling the truth. Although this policy is in general hard to compute, we present sufficient conditions under which it is tractable. In the top-k setting, we prove that auditing all agents who receive the scarce resource uniformly at random (again, modulo special treatment of agents who are either certainly truthful or dishonest) yields an additive approximation bound, although the problem is hard in general. Furthermore, we show that this audit policy is optimal if we consider dominant strategy incentive compatibility as a solution concept instead of ε-BNIC. Surprisingly, the verification problem is even harder: determining if any audit policy can incentivize truthful reporting is #P-hard even for a uniform prior over features and only two agents. However, we give sufficient conditions under which verification becomes tractable in the threshold setting for both piecewise linear and logistic scoring functions.
Researcher Affiliation | Academia | Andrew Estornell (1), Sanmay Das (2), Yevgeniy Vorobeychik (1); (1) Computer Science & Engineering, Washington University in St. Louis; (2) Computer Science, George Mason University
Pseudocode | No | The paper describes policies like UNIFORM and UNIFORM-K, but it does not include any structured pseudocode or algorithm blocks (an illustrative sketch of these policies follows this table).
Open Source Code | No | The paper does not mention releasing any source code or provide links to a code repository for the methodology described.
Open Datasets | No | The paper is theoretical and discusses properties of the prior distribution D and 'well-behaved' distributions h. It cites other papers that may use datasets (e.g., Kube, Das, and Fowler 2019; Chouldechova et al. 2018), but it does not use, or provide access information for, any specific dataset in its own analysis.
Dataset Splits | No | The paper is theoretical and does not conduct experiments with data, so no training, validation, or test split information is provided.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used to run experiments.
Software Dependencies | No | The paper is theoretical and does not mention any software dependencies or versions required to replicate its work.
Experiment Setup | No | The paper is theoretical, focusing on mathematical proofs and policy design rather than empirical experimentation, so it provides no details on experimental setup, hyperparameters, or training configurations.
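Since the paper provides no pseudocode, the following is a minimal Python sketch of the UNIFORM and UNIFORM-K ideas as stated in the abstract: audit uniformly at random among eligible agents, with special handling for agents who are obviously lying or obviously truthful. All names here (uniform_audit, threshold_eligible, top_k_eligible, and the certainly_truthful / certainly_lying predicates) are hypothetical placeholders, not the authors' implementation.

```python
import random

def uniform_audit(reports, eligible, budget,
                  certainly_truthful=lambda r: False,
                  certainly_lying=lambda r: False):
    """Audit uniformly at random within the eligible set, after setting
    aside agents whose reports are obviously false (always audited) or
    obviously truthful (never audited). Both predicates are hypothetical
    placeholders; the paper leaves this 'special consideration' abstract."""
    must = [i for i in eligible if certainly_lying(reports[i])]
    pool = [i for i in eligible
            if not certainly_lying(reports[i])
            and not certainly_truthful(reports[i])]
    draws = min(max(budget - len(must), 0), len(pool))
    return set(must) | set(random.sample(pool, draws))

def threshold_eligible(reports, score, threshold):
    """Threshold setting (UNIFORM): every agent whose reported score
    clears the allocation threshold is eligible for audit."""
    return [i for i, r in enumerate(reports) if score(r) >= threshold]

def top_k_eligible(reports, score, k):
    """Top-k setting (UNIFORM-K): only the k agents who receive the
    scarce resource, i.e. the k highest reported scores, are eligible."""
    ranked = sorted(range(len(reports)),
                    key=lambda i: score(reports[i]), reverse=True)
    return ranked[:k]

# Toy usage: reports are scalar features and the score is the identity.
reports = [0.9, 0.4, 0.75, 0.95, 0.6]
score = lambda r: r
print(uniform_audit(reports, threshold_eligible(reports, score, 0.7), budget=2))
print(uniform_audit(reports, top_k_eligible(reports, score, k=2), budget=1))
```

Note that this sketch only selects which agents to audit; in the paper's model an audit reveals an agent's true features and dishonest agents lose the resource, a consequence omitted here.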