Learning Optimal Contracts: How to Exploit Small Action Spaces

Authors: Francesco Bacchiocchi, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study principal-agent problems in which a principal commits to an outcome-dependent payment scheme, called a contract, in order to induce an agent to take a costly, unobservable action leading to favorable outcomes. We design an algorithm that learns an approximately-optimal contract with high probability in a number of rounds polynomial in the size of the outcome space, when the number of actions is constant. Our algorithm solves an open problem posed by Zhu et al. (2023). Moreover, it can also be employed to provide an Õ(T^{4/5}) regret bound in the related online learning setting, in which the principal aims at maximizing their cumulative utility over rounds, considerably improving previously-known regret bounds. (A toy sketch of this principal-agent model follows the table.) |
| Researcher Affiliation | Academia | Francesco Bacchiocchi, Matteo Castiglioni, Alberto Marchesi & Nicola Gatti, Politecnico di Milano, Milan, Italy ({name.surname}@polimi.it) |
| Pseudocode | Yes | Algorithm 1: Discover-and-Cover; Algorithm 2: Action-Oracle; Algorithm 3: Try-Cover; Algorithm 4: Find-Contract; Algorithm 5: No-regret algorithm; Algorithm 6: Find-HS |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments requiring dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software or library versions used for implementation. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training configurations. |
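Since the paper ships no code, the snippet below is a minimal Python sketch of the hidden-action principal-agent model it studies: the principal posts a contract (an outcome-dependent payment vector), the agent best-responds by maximizing expected payment minus cost (breaking ties in the principal's favor, the standard assumption), and the principal collects reward minus payment. This illustrates the model only, not the authors' Discover-and-Cover or Find-Contract algorithms; all names and numbers here are hypothetical.

```python
import numpy as np

def agent_best_response(F, costs, contract, rewards):
    """Return the agent's best-response action to a contract.

    F:        (n, m) matrix; F[a, w] = probability of outcome w under action a.
    costs:    (n,) action costs incurred by the agent.
    contract: (m,) outcome-dependent payments from principal to agent.
    rewards:  (m,) principal's reward for each outcome.
    Ties are broken in favor of the principal (the standard assumption).
    """
    agent_utils = F @ contract - costs              # expected payment minus cost
    tied = np.flatnonzero(np.isclose(agent_utils, agent_utils.max()))
    principal_utils = F[tied] @ (rewards - contract)
    return tied[np.argmax(principal_utils)]

def principal_utility(F, costs, contract, rewards):
    """Principal's expected utility when the agent best-responds."""
    a = agent_best_response(F, costs, contract, rewards)
    return F[a] @ (rewards - contract)

# Toy instance with n = 2 actions and m = 3 outcomes (all numbers hypothetical).
F = np.array([[0.6, 0.3, 0.1],       # outcome distribution of action 0
              [0.1, 0.3, 0.6]])      # outcome distribution of action 1
costs = np.array([0.0, 0.2])
rewards = np.array([0.0, 0.5, 1.0])
contract = np.array([0.0, 0.1, 0.4])

print(principal_utility(F, costs, contract, rewards))  # 0.48: agent plays action 1
```

In this toy instance both actions give the agent the same utility (0.07), so the principal-favorable tie-breaking selects action 1. The learning problem the paper addresses is, roughly, finding a near-optimal `contract` vector when `F` and `costs` are unknown and the principal observes only sampled outcomes of the agent's hidden best responses.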