On Infinite Separations Between Simple and Optimal Mechanisms
Authors: Alexandros Psomas, Ariel Schvartzman Cohenca, S. Matthew Weinberg
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We consider a revenue-maximizing seller with k heterogeneous items for sale to a single additive buyer, whose values are drawn from a known, possibly correlated prior D. It is known that there exist priors D such that simple mechanisms (those with bounded menu complexity) extract an arbitrarily small fraction of the optimal revenue ([BCKW15, HN19]). This paper considers the opposite direction: given a correlated distribution D witnessing an infinite separation between simple and optimal mechanisms, what can be said about D? [HN19] provides a framework for constructing such D: it takes as input a sequence of k-dimensional vectors satisfying some geometric property, and produces a D witnessing an infinite gap. Our first main result establishes that this framework is without loss: every D witnessing an infinite separation could have resulted from this framework. An earlier version of their work provided a more streamlined framework [HN13]. Our second main result establishes that this restrictive framework is not tight. That is, we provide an instance D witnessing an infinite gap, but which provably could not have resulted from the restrictive framework. As a corollary, we discover a new kind of mechanism which can witness these infinite separations on instances where the previous aligned mechanisms do not. (An illustrative numerical sketch of the simple-versus-optimal comparison appears below the table.) |
| Researcher Affiliation | Academia | Department of Computer Science, Purdue University, apsomas@cs.purdue.edu. Supported in part by an NSF CAREER award CCF-2144208, a Google Research Scholar Award, and by the Algorand Centres of Excellence program managed by Algorand Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Algorand Foundation. Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), schvartman.ariel@gmail.com. Research conducted while at DIMACS, Rutgers University and supported by the National Science Foundation under grant number CCF-1445755. Department of Computer Science, Princeton University, smweinberg@princeton.edu. Supported by NSF CCF-1717899. |
| Pseudocode | No | The paper contains mathematical definitions and proofs but does not include pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or links to a code repository for the described methodology. The 'If you ran experiments' section explicitly states N/A for code. |
| Open Datasets | No | This is a theoretical paper that does not use or analyze datasets empirically. The 'If you ran experiments' section explicitly states N/A for data. |
| Dataset Splits | No | This is a theoretical paper that does not use or analyze datasets empirically, and therefore does not discuss dataset splits. The 'If you ran experiments' section explicitly states N/A for data. |
| Hardware Specification | No | This is a theoretical paper and does not mention any hardware used for experiments. The 'If you ran experiments' section explicitly states N/A for compute resources. |
| Software Dependencies | No | This is a theoretical paper and does not mention specific software dependencies with version numbers for experimental reproducibility. The 'If you ran experiments' section explicitly states N/A for software details. |
| Experiment Setup | No | This is a theoretical paper and does not describe an experimental setup with hyperparameters or training settings. The 'If you ran experiments' section explicitly states N/A for training details. |
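
The paper is purely theoretical and reports no experiments, but the quantity it studies, the revenue gap between bounded-menu-complexity mechanisms and the optimal mechanism for a correlated prior D, can be made concrete numerically. The sketch below is not from the paper and does not implement the [HN19] framework; it is a minimal illustration, on an arbitrary toy prior, of comparing the best single grand-bundle price (a menu of size one) against the LP-optimal mechanism for a single additive buyer. The `types`/`probs` prior and the use of numpy and scipy's `linprog` are assumptions for illustration only.

```python
"""
Illustrative sketch (not from the paper): for a small discrete, correlated
prior over k=2 items and a single additive buyer, compare
  (a) the best single grand-bundle price (a menu of size 1), against
  (b) the optimal, possibly randomized mechanism, via the standard IC/IR LP.
The prior below is a toy example, not an instance from [HN19].
Requires numpy and scipy.
"""
import numpy as np
from scipy.optimize import linprog

# Toy correlated prior: each row is a buyer type (value for item 1, value for item 2),
# with the corresponding probability in `probs`.
types = np.array([[1.0, 2.0],
                  [4.0, 8.0],
                  [16.0, 32.0]])
probs = np.array([0.6, 0.3, 0.1])
T, k = types.shape

# (a) Best grand-bundle price: the buyer purchases iff her total value >= price.
bundle_values = types.sum(axis=1)
best_bundle_rev = max(p * probs[bundle_values >= p].sum() for p in bundle_values)

# (b) Optimal mechanism via LP.
# Variables, per type t: allocation probabilities x[t, j] in [0, 1] and payment p[t].
# Objective: maximize expected payment  =>  minimize -sum_t probs[t] * p[t].
n_vars = T * (k + 1)

def x_idx(t, j):          # index of allocation variable x[t, j]
    return t * (k + 1) + j

def p_idx(t):             # index of payment variable p[t]
    return t * (k + 1) + k

c = np.zeros(n_vars)
for t in range(T):
    c[p_idx(t)] = -probs[t]

A_ub, b_ub = [], []

# IR: for each type t,  p[t] - sum_j v_t[j] * x[t, j] <= 0.
for t in range(T):
    row = np.zeros(n_vars)
    row[p_idx(t)] = 1.0
    for j in range(k):
        row[x_idx(t, j)] = -types[t, j]
    A_ub.append(row); b_ub.append(0.0)

# IC: type t must not prefer type s's menu entry:
#     (v_t . x[s] - p[s]) - (v_t . x[t] - p[t]) <= 0.
for t in range(T):
    for s in range(T):
        if s == t:
            continue
        row = np.zeros(n_vars)
        for j in range(k):
            row[x_idx(s, j)] += types[t, j]
            row[x_idx(t, j)] -= types[t, j]
        row[p_idx(s)] -= 1.0
        row[p_idx(t)] += 1.0
        A_ub.append(row); b_ub.append(0.0)

# Allocations lie in [0, 1]; payments are nonnegative (no positive transfers).
bounds = [(0.0, 1.0) if (i % (k + 1)) < k else (0.0, None) for i in range(n_vars)]

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
opt_rev = -res.fun

print(f"best grand-bundle revenue: {best_bundle_rev:.3f}")
print(f"optimal mechanism revenue: {opt_rev:.3f}")
```

On any finite toy prior like this one the gap is of course finite; the infinite separations studied in [BCKW15, HN19] and in this paper arise only in the limit, over priors with unbounded support.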