A Sample Complexity Measure with Applications to Learning Optimal Auctions

Authors: Vasilis Syrgkanis

NeurIPS 2017

Reproducibility Variable Result LLM Response
Research Type Theoretical We introduce a new sample complexity measure, which we refer to as the split-sample growth rate. For any hypothesis space H and any sample S of size m, the split-sample growth rate τ̂_H(m) counts how many different hypotheses empirical risk minimization can output on any sub-sample of S of size m/2. We show that the expected generalization error is upper bounded by O(√(log(τ̂_H(2m))/m)). This work focuses solely on sample complexity rather than computational efficiency, and is thus more closely related to [4, 9, 10, 2]. The latter line of work uses tools from supervised learning, such as the pseudo-dimension [12] (a variant of the VC dimension for real-valued functions), compression bounds [8], and Rademacher complexity [12, 14], to bound the sample complexity of simple auction classes. Our work introduces a new measure of sample complexity that strengthens the Rademacher complexity analysis, and hence could also be of independent interest beyond the sample complexity of optimal auctions.
Researcher Affiliation Industry Vasilis Syrgkanis Microsoft Research vasy@microsoft.com
Pseudocode No The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code No The paper does not provide any statement about releasing open-source code or a link to a code repository for the described methodology.
Open Datasets No The paper is theoretical and does not describe experiments using datasets, thus no information about public datasets for training is provided.
Dataset Splits No The paper is theoretical and does not describe experiments, thus no information about training/validation/test splits is provided.
Hardware Specification No The paper is theoretical and does not describe experiments that would require specific hardware, thus no hardware specifications are provided.
Software Dependencies No The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup No The paper is theoretical and does not describe empirical experiments, thus no experimental setup details like hyperparameters or training configurations are provided.
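The split-sample growth rate quoted above can be made concrete with a small sketch. The following is an illustration only, not code from the paper: it assumes a toy single-bidder reserve-price hypothesis class, where ERM on a finite sub-sample always picks one of the sub-sample's values as the reserve, and it counts the distinct ERM outputs over all sub-samples of half the size (the function names `erm_reserve` and `split_sample_growth` are ours).

```python
import itertools

def erm_reserve(subsample):
    """ERM for single-bidder revenue maximization: on a finite sample,
    an optimal reserve price is always one of the sample values, so we
    maximize empirical revenue r * #{v in subsample : v >= r} over them."""
    return max(subsample, key=lambda r: r * sum(1 for v in subsample if v >= r))

def split_sample_growth(sample):
    """Count how many distinct hypotheses (reserve prices) ERM can output
    across all sub-samples of S of size m/2 -- the quantity that the
    split-sample growth rate tau_H(m) bounds over all samples of size m."""
    half = len(sample) // 2
    outputs = {erm_reserve(sub) for sub in itertools.combinations(sample, half)}
    return len(outputs)

values = [1.0, 2.0, 3.0, 5.0]
print(split_sample_growth(values))  # → 4
```

For this class the count is small relative to the number of sub-samples, which is the point of the measure: the generalization bound depends only logarithmically on how many hypotheses ERM can realize on half-samples, not on the size of the full hypothesis space.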