Learning Auctions with Robust Incentive Guarantees

Authors: Jacob D. Abernethy, Rachel Cummings, Bhuvesh Kumar, Sam Taggart, Jamie H. Morgenstern

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of learning Bayesian-optimal revenue-maximizing auctions. The classical approach to maximizing revenue requires a known prior distribution on the demand of the bidders, although recent work has shown how to replace the knowledge of a prior distribution with a polynomial number of samples. However, in an online setting, when buyers can participate in multiple rounds, standard learning techniques are susceptible to strategic overfitting: bidders can improve their long-term well-being by manipulating the trajectory of the learning algorithm through their bids. For example, they may be able to strategically adjust their behavior in earlier rounds to achieve lower, more favorable future prices. Such non-truthful behavior can hinder learning and harm revenue. In this paper, we combine tools from differential privacy, mechanism design, and sample complexity to give a repeated auction that (1) learns bidder demand from past data, (2) is approximately revenue-optimal, and (3) is strategically robust, as it incentivizes bidders to behave truthfully. Our two main results are the first computationally tractable algorithms for learning nearly-optimal Bayesian revenue-maximizing auctions with an approximate truthfulness guarantee. We first give a learning algorithm for which there exists an approximate equilibrium in which all bidders report their true values. Along the way, we provide several technical lemmas for comparing the revenue of mechanisms on two similar distributions, and for comparing the revenue of two similar mechanisms on any fixed distribution; these may be of independent interest beyond this work. Second, under an assumption limiting changes in buyer behavior from round to round, we construct a mechanism with an exact equilibrium in which every bidder bids within a small constant of their value, while also achieving a nearly optimal revenue guarantee.
Researcher Affiliation | Academia | Jacob Abernethy (Georgia Tech, prof@gatech.edu); Rachel Cummings (Georgia Tech, rachelc@gatech.edu); Bhuvesh Kumar (Georgia Tech, bhuvesh@gatech.edu); Jamie Morgenstern (Georgia Tech, jamiemmt.cs@gatech.edu); Samuel Taggart (Oberlin College, sam.taggart@oberlin.edu)
Pseudocode | Yes | Algorithm 1: Utility-Approximate BIC Online Auction. Parameters: discretization β, privacy ϵ, upper bound on support h, number of rounds T. Initialize: H_{i,0} ← Uniform(0, h) for i = 1, ..., n. For t = 1, ..., T: receive the bid profile v_t = (v_{1,t}, ..., v_{n,t}), rounded down to integer multiples of β; run Myerson (Def. 4) with H_{t-1} as the prior and v_t as the bids to compute allocations and payments; for i = 1, ..., n, update H_{i,t} via two-fold tree aggregation (Algorithm 3), giving v_{i,t} as input. (A simplified illustrative sketch of this loop appears after the table.)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, such as a specific repository link, an explicit code-release statement, or code in supplementary materials.
Open Datasets | No | The paper does not provide concrete access information for a publicly available or open dataset. It focuses on theoretical models and distributions rather than empirical data.
Dataset Splits | No | The paper does not describe any dataset splits (training, validation, test), as it focuses on theoretical analysis rather than empirical experimentation.
Hardware Specification | No | The paper does not provide specific hardware details, as it is a theoretical work with no experiments to run.
Software Dependencies | No | The paper does not list ancillary software such as library or solver names with version numbers, as it is theoretical and does not describe implementation specifics.
Experiment Setup | No | The paper does not contain experimental setup details, hyperparameter values, or training configurations, as it focuses on theoretical development.
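
Background on the "Bayesian-optimal revenue-maximizing auctions" referenced in the abstract, and on the "Run Myerson (Def. 4)" step in the Pseudocode row: the standard Myerson (1981) characterization, stated here for context rather than quoted from the paper, expresses expected revenue as expected virtual welfare,

    \varphi_i(v) \;=\; v - \frac{1 - F_i(v)}{f_i(v)},
    \qquad
    \mathbb{E}[\mathrm{Revenue}] \;=\; \mathbb{E}_{v \sim F}\Big[\sum_i \varphi_i(v_i)\, x_i(v)\Big],

where F_i and f_i are bidder i's value CDF and density and x_i(v) is the allocation rule. The revenue-optimal auction allocates to the bidder with the highest nonnegative (ironed) virtual value; Algorithm 1 applies this rule with the privately estimated distributions H_{t-1} in place of the true prior.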
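
To make the Pseudocode row concrete, below is a minimal, self-contained Python sketch of the online loop in Algorithm 1, under assumptions that are not taken from the paper: a single-item auction, synthetic uniform bids, fresh per-round Laplace noise on histogram counts in place of the paper's two-fold tree aggregation (Algorithm 3), and no ironing of virtual values (the estimated distributions are treated as regular). Function names such as noisy_pmf and myerson_round are illustrative, not the authors'; this is a sketch of the structure, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def noisy_pmf(past_bids, grid, eps):
    """Noisy histogram of a bidder's past (discretized) bids.
    Simplification: fresh Laplace noise each round instead of the paper's
    two-fold tree aggregation."""
    counts = np.array([np.sum(np.isclose(past_bids, g)) for g in grid], dtype=float)
    counts += rng.laplace(scale=1.0 / eps, size=len(grid))
    counts = np.clip(counts, 1e-6, None)          # keep a valid probability vector
    return counts / counts.sum()

def virtual_values(grid, pmf):
    """Myerson virtual value phi(v) = v - (1 - F(v)) / f(v) on the grid
    (no ironing in this sketch)."""
    cdf = np.cumsum(pmf)
    return grid - (1.0 - cdf) / pmf

def myerson_round(bids, grid, pmfs, beta):
    """Allocate the item to the highest nonnegative virtual value and charge
    the smallest grid bid that would still have won."""
    idx = [int(round(b / beta)) for b in bids]    # position of each bid on the grid
    phis = [virtual_values(grid, p) for p in pmfs]
    vv = np.array([phi[i] for phi, i in zip(phis, idx)])
    winner = int(np.argmax(vv))
    if vv[winner] < 0:
        return None, 0.0                          # reserve not met: no sale
    threshold = float(np.max(np.delete(vv, winner), initial=0.0))
    payment = grid[np.where(phis[winner] >= threshold)[0][0]]
    return winner, payment

# Online loop on synthetic uniform bids (illustrative only).
n, T, h, beta, eps = 3, 200, 1.0, 0.05, 1.0
grid = np.round(np.arange(0.0, h + beta, beta), 10)
history = [[] for _ in range(n)]
revenue = 0.0
for t in range(T):
    bids = np.floor(rng.uniform(0.0, h, size=n) / beta) * beta   # round down to the beta-grid
    pmfs = [noisy_pmf(history[i], grid, eps) if history[i]
            else np.full(len(grid), 1.0 / len(grid))             # Uniform(0, h) prior
            for i in range(n)]
    winner, payment = myerson_round(bids, grid, pmfs, beta)
    if winner is not None:
        revenue += payment
    for i in range(n):
        history[i].append(bids[i])

print(f"average revenue per round: {revenue / T:.3f}")

The uniform initial prior and the rounding of bids to the β-grid mirror the initialization and rounding steps of the pseudocode; everything else is a simplified stand-in.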