Robust Learning of Optimal Auctions
Authors: Wenshuo Guo, Michael Jordan, Emmanouil Zampetakis
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove tight upper bounds on the revenue we can obtain with a corrupted distribution under a population model, for both regular valuation distributions and distributions with monotone hazard rate (MHR). We then propose new algorithms that, given only an approximate distribution for the bidder's valuation, can learn a mechanism whose revenue is nearly optimal simultaneously for all true distributions that are α-close to the original distribution in Kolmogorov-Smirnov distance. The proposed algorithms operate beyond the setting of bounded distributions that have been studied in prior works, and are guaranteed to obtain a fraction 1 − O(α) of the optimal revenue under the true distribution when the distributions are MHR. Moreover, they are guaranteed to yield at least a fraction 1 − O(√α) of the optimal revenue when the distributions are regular. We prove that these upper bounds cannot be further improved, by providing matching lower bounds. Lastly, we derive sample complexity upper bounds for learning a near-optimal auction for both MHR and regular distributions. (An illustrative sketch of the Kolmogorov-Smirnov closeness measure appears after the table.) |
| Researcher Affiliation | Academia | Wenshuo Guo, University of California, Berkeley (wguo@cs.berkeley.edu); Michael I. Jordan, University of California, Berkeley (jordan@cs.berkeley.edu); Manolis Zampetakis, University of California, Berkeley (mzampet@berkeley.edu) |
| Pseudocode | Yes | Algorithm 1: Robust Myerson Auction in the Population Model; Algorithm 2: Robust Empirical Myerson Auction (a simplified illustrative sketch appears after the table) |
| Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the methodology is available. The 'Reproducibility Statement' section indicates 'N/A' for code. |
| Open Datasets | No | The paper focuses on theoretical models and algorithms for learning optimal auctions from 'samples', which are abstract draws from valuation distributions rather than a concrete dataset. It does not use or specify any publicly available datasets for empirical training or evaluation. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with dataset splits (training, validation, test). |
| Hardware Specification | No | The paper does not specify any hardware used for experiments. The 'Reproducibility Statement' section indicates 'N/A' for hardware. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. The 'Reproducibility Statement' section indicates 'N/A' for code. |
| Experiment Setup | No | The paper is theoretical and describes algorithms and proofs, not an empirical experimental setup with hyperparameters or system-level training settings. |
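
The abstract quoted in the Research Type row states its guarantees in terms of the Kolmogorov-Smirnov (KS) distance α between the approximate and the true valuation distribution. The sketch below, a minimal illustration and not code from the paper, computes an empirical two-sample KS distance and prints the corresponding 1 − O(α) (MHR) and 1 − O(√α) (regular) revenue-fraction forms; the exponential true distribution and the additive perturbation are assumptions made only for this example.

```python
# Illustrative sketch (not from the paper): measuring how far a corrupted
# sample is from the true valuation distribution in Kolmogorov-Smirnov
# distance, the perturbation level alpha appearing in the paper's guarantees.
import numpy as np

rng = np.random.default_rng(0)

true_values = rng.exponential(scale=1.0, size=100_000)  # exponential is MHR
corrupted = true_values + rng.uniform(0.0, 0.05, size=true_values.shape)  # mild perturbation (assumed)

def ks_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample Kolmogorov-Smirnov distance: sup_x |F_a(x) - F_b(x)|."""
    grid = np.sort(np.concatenate([a, b]))
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(F_a - F_b)))

alpha = ks_distance(true_values, corrupted)
# For MHR distributions the paper's mechanisms retain a 1 - O(alpha) revenue
# fraction; for regular distributions, a 1 - O(sqrt(alpha)) fraction.
print(f"empirical KS distance alpha ~ {alpha:.3f}")
print(f"MHR guarantee     ~ 1 - O({alpha:.3f})")
print(f"regular guarantee ~ 1 - O({np.sqrt(alpha):.3f})")
```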
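The Pseudocode row names the paper's Algorithm 2, a Robust Empirical Myerson Auction. The sketch below is a deliberately simplified, single-bidder illustration of the underlying robustness idea and is not the paper's multi-bidder construction: it posts the price that maximizes revenue against the worst distribution within KS distance α of the empirical one, by pessimistically lowering the empirical sale probability by α. The function name, the choice of α, and the exponential samples are assumptions for this example only.

```python
# Simplified single-bidder sketch of the robustness idea behind a
# "robust empirical Myerson" reserve price; not the paper's Algorithm 2.
import numpy as np

def robust_reserve_price(samples: np.ndarray, alpha: float) -> float:
    """Price p maximizing p * max(1 - F_hat(p) - alpha, 0) over the sample points."""
    v = np.sort(samples)
    n = len(v)
    # Empirical probability that a fresh value is >= v[i], i.e. the sale
    # probability at posted price v[i].
    sale_prob = 1.0 - np.arange(n) / n
    worst_case_prob = np.clip(sale_prob - alpha, 0.0, 1.0)  # KS-ball pessimism
    revenue = v * worst_case_prob
    return float(v[int(np.argmax(revenue))])

# Usage with synthetic exponential valuations (an assumption for the example).
rng = np.random.default_rng(1)
samples = rng.exponential(scale=1.0, size=10_000)
print(robust_reserve_price(samples, alpha=0.02))
```

The pessimistic shift by α is what keeps the posted price's revenue guarantee valid simultaneously for every true distribution in the KS ball around the empirical one, at the cost of the O(α) or O(√α) revenue loss quoted in the abstract.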