Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues

Authors: Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, Amir Yehudayoff

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution. Our variant allows for tighter convergence bounds for extreme values of the CDF. We apply our bound in the context of revenue learning, which is a well-studied problem in economics and algorithmic game theory. We derive sample-complexity bounds on the uniform convergence rate of the empirical revenues to the true revenues, assuming a bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization and a zero-one law: if the first moment of the valuations is finite, then uniform convergence almost surely occurs; conversely, if the first moment is infinite, then uniform convergence almost never occurs. (See the illustrative restatement and simulation sketch following this table.)
Researcher Affiliation | Collaboration | Noga Alon (Tel Aviv University, Israel, and Microsoft Research; nogaa@tau.ac.il); Moshe Babaioff (Microsoft Research; moshe@microsoft.com); Yannai A. Gonczarowski (The Hebrew University of Jerusalem, Israel, and Microsoft Research; yannai@gonch.name); Yishay Mansour (Tel Aviv University, Israel, and Google Research, Israel; mansour@tau.ac.il); Shay Moran (Institute for Advanced Study, Princeton; shaymoran1@gmail.com); Amir Yehudayoff (Technion IIT, Israel; amir.yehudayoff@gmail.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement or link providing concrete access to source code for the methodology described.
Open Datasets | No | The paper does not mention using any specific public or open datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not discuss dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, hyperparameters, or training configurations.
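
For context on the abstract quoted above, the two central objects can be written out in standard notation. This restatement is ours, not copied from the paper: the classic Glivenko-Cantelli Theorem and the empirical revenue of a posted price p estimated from n sampled valuations v_1, ..., v_n.

```latex
% Classic Glivenko–Cantelli: the empirical CDF converges uniformly, a.s.
\sup_{t \in \mathbb{R}} \bigl| \hat{F}_n(t) - F(t) \bigr|
  \xrightarrow[n \to \infty]{} 0 \quad \text{a.s.},
\qquad
\hat{F}_n(t) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}[v_i \le t].

% Revenue learning: true and empirical revenue of posting price p.
r(p) = p \cdot \Pr_{v \sim F}[v \ge p],
\qquad
\hat{r}_n(p) = p \cdot \frac{1}{n} \bigl| \{ i : v_i \ge p \} \bigr|.
```

Since the paper is theoretical and ships no code, the following is a minimal simulation sketch of our own (function names, the lognormal choice, and the price grid are all assumptions, not artifacts of the paper). It probes the positive direction of the zero-one law: the lognormal has a finite first moment (E[v] = e^{1/2}), so the sup-gap between the empirical and true revenue curves should shrink as n grows.

```python
# Illustrative sketch only; not code from the paper.
from math import erfc, log, sqrt

import numpy as np

rng = np.random.default_rng(0)

def empirical_revenue(prices, valuations):
    """r_hat(p) = p * (fraction of sampled valuations that are >= p)."""
    return prices * (valuations[None, :] >= prices[:, None]).mean(axis=1)

def true_revenue_lognormal(prices, sigma=1.0):
    """r(p) = p * Pr[v >= p] for v ~ Lognormal(0, sigma^2).
    The lognormal survival function is computed via erfc (no SciPy needed)."""
    sf = np.array([0.5 * erfc(log(p) / (sigma * sqrt(2.0))) for p in prices])
    return prices * sf

prices = np.linspace(0.01, 20.0, 400)  # finite grid approximating the sup
truth = true_revenue_lognormal(prices)
for n in (100, 1_000, 10_000, 100_000):
    v = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    gap = np.abs(empirical_revenue(prices, v) - truth).max()
    print(f"n={n:>7}: sup |r_hat - r| over the grid = {gap:.4f}")
```

By the paper's converse direction, replacing the lognormal with an infinite-mean distribution (e.g., a Pareto with tail index at most 1) makes uniform convergence over all prices fail almost surely; observing this numerically would require letting the price grid extend with the sample, since the divergence arises at extreme prices.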