No-Regret Learning in Partially-Informed Auctions

Authors: Wenshuo Guo, Michael Jordan, Ellen Vitercik

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We formalize this problem as an online learning task where the goal is to have low regret with respect to a myopic oracle that has perfect knowledge of the distribution over items and the seller's masking function. When the distribution over items is known to the buyer and the mask is a SimHash function mapping ℝ^d to {0, 1}^ℓ, our algorithm has regret Õ((Tdℓ)^{1/2}). In a fully agnostic setting where the mask is an arbitrary function mapping to a set of size n and the prices are stochastic, our algorithm has regret Õ((Tn)^{1/2}).
Researcher Affiliation | Academia | (1) Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, USA; (2) Department of Statistics, University of California, Berkeley, USA; (3) Department of Management Science & Engineering and Department of Computer Science, Stanford University, USA.
Pseudocode | Yes | Algorithm 1: Explore-then-Commit (known distribution); Algorithm 2: Exp4.VC with an unknown distribution
Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of its source code.
Open Datasets | No | The paper is theoretical and does not involve experimental training on a specific, publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not describe experimental validation involving dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments.
Software Dependencies | No | The paper refers to specific algorithms (e.g., Exp4.VC, Lovász & Vempala's integration algorithm) but does not list any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup, hyperparameters, or training configurations.
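As context for the "Research Type" row above, a SimHash mask maps a vector in ℝ^d to an ℓ-bit signature by recording which side of ℓ random hyperplanes the vector falls on. The sketch below is a minimal illustration of that mapping, not the paper's implementation; the dimensions, the Gaussian hyperplane draws, and the function name `simhash_mask` are illustrative assumptions.

```python
import random

def simhash_mask(x, planes):
    """Map a vector x in R^d to an ell-bit signature in {0, 1}^ell.

    Bit j is 1 iff x lies on the non-negative side of hyperplane j,
    i.e. the sign of the inner product <planes[j], x>.
    """
    return tuple(
        1 if sum(p_i * x_i for p_i, x_i in zip(p, x)) >= 0 else 0
        for p in planes
    )

# Illustrative parameters (not from the paper): d = 5, ell = 3.
rng = random.Random(0)
d, ell = 5, 3
planes = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(ell)]  # ell random hyperplanes
item = [rng.gauss(0, 1) for _ in range(d)]
mask = simhash_mask(item, planes)  # an ell-bit mask of the item
```

Note that the signature is invariant to positive rescaling of the input, since only the sign of each projection matters; this is the sense in which the mask reveals coarse, partial information about the item.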