Repeated Contextual Auctions with Strategic Buyers

Authors: Kareem Amin, Afshin Rostamizadeh, Umar Syed

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We give the first algorithm attaining sublinear (O(T^{2/3})) regret in the contextual setting against a surplus-maximizing buyer. We also extend this result to repeated second-price auctions with multiple buyers. The rest of the paper is organized as follows. We first introduce a linear model by which values v_t are derived from contexts x_t. We then demonstrate an algorithm based on stochastic gradient descent (SGD) which achieves sublinear regret against a truthful buyer (one that accepts price p_t iff p_t ≤ v_t on every round t). The analysis for the truthful buyer uses preexisting high-probability bounds for SGD when minimizing strongly convex functions [15]. Our main result requires an extension of this analysis to cases in which incorrect gradients are occasionally observed. This lets us study a buyer that is allowed to best-respond to our algorithm, possibly rejecting offers that the truthful buyer would not reject, in order to receive better offers on future rounds. We also adapt our algorithm to non-linear settings via a kernelized version of the algorithm. Finally, we extend our results to second-price auctions with multiple buyers. (A minimal illustrative sketch of such an SGD pricing loop appears after this table.)
Researcher Affiliation | Collaboration | Kareem Amin (University of Pennsylvania, akareem@cis.upenn.edu); Afshin Rostamizadeh (Google Research, rostami@google.com); Umar Syed (Google Research, usyed@google.com)
Pseudocode | Yes | Algorithm 1 (LEAP algorithm); Algorithm 2 (Kernelized LEAP algorithm)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | This is a theoretical paper that introduces algorithms and provides mathematical proofs and regret bounds. It models context vectors x_t drawn from a fixed distribution D but does not conduct experiments on a publicly available or open dataset.
Dataset Splits | No | This paper is theoretical and does not involve empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | This paper is theoretical and does not describe any hardware specifications used for running experiments.
Software Dependencies | No | This paper is theoretical and does not specify any software dependencies with version numbers for reproducibility.
Experiment Setup | No | This paper is theoretical and does not include details on experimental setup such as hyperparameters or system-level training settings.
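The paper's algorithms are not reproduced in this summary, so the following is only a minimal sketch of the kind of loop the Research Type row describes: a seller posts prices from a linear estimate of the buyer's value v_t derived from the context x_t, a truthful buyer accepts iff p_t ≤ v_t, and the estimate is updated with an SGD-style step from the binary feedback. The weight vector, context distribution, step size, and update rule below are illustrative assumptions, not the paper's LEAP algorithm, which additionally handles strategic (non-truthful) buyers and a kernelized, non-linear setting.

```python
import numpy as np

# Minimal sketch (not the paper's LEAP algorithm): posted-price loop against a
# truthful buyer under a linear valuation model v_t = <w_star, x_t>, with an
# SGD-style update driven only by accept/reject feedback. All constants and the
# exact update rule are illustrative assumptions.

rng = np.random.default_rng(0)
d, T = 5, 2000
w_star = rng.uniform(0.2, 0.8, size=d)   # unknown buyer weight vector (assumption)
w_hat = np.zeros(d)                      # seller's running estimate

revenue = 0.0
for t in range(1, T + 1):
    x_t = rng.dirichlet(np.ones(d))      # context vector on the simplex (assumption)
    v_t = w_star @ x_t                   # buyer's value this round
    p_t = w_hat @ x_t                    # posted price from the current estimate
    accepted = p_t <= v_t                # truthful buyer: accept iff price <= value
    revenue += p_t if accepted else 0.0

    # SGD-style step from binary feedback: nudge the estimate up along x_t after an
    # acceptance (price may have been too low), down after a rejection (too high).
    eta = 1.0 / np.sqrt(t)               # illustrative decaying step size
    g = -x_t if accepted else x_t        # surrogate (sub)gradient, sign from feedback
    w_hat = np.clip(w_hat - eta * g, 0.0, 1.0)

print(f"revenue: {revenue:.1f}  estimate error: {np.linalg.norm(w_hat - w_star):.3f}")
```

Running this shows the posted prices tracking the buyer's values over rounds in the truthful case; the paper's harder setting, where a strategic buyer may reject acceptable prices to obtain better future offers, corresponds to occasionally corrupted gradients in such a loop, which is what its extended SGD analysis addresses.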