Generalization Bounds for Uniformly Stable Algorithms

Authors: Vitaly Feldman, Jan Vondrak

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We substantially improve generalization bounds for uniformly stable algorithms without making any additional assumptions. First, we show that the bound in this setting is O(√((γ + 1/n) log(1/δ))) with probability at least 1 − δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the estimation error. The best previous bound on the second moment is O(γ + 1/n). Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms. (The two bounds are restated in LaTeX after the table.)
Researcher Affiliation | Collaboration | Vitaly Feldman (Google Brain); Jan Vondrak (Stanford University)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It focuses on theoretical proofs and mathematical derivations.
Open Source Code | No | The paper does not provide any concrete access to source code, nor does it state that code will be made available.
Open Datasets | No | This is a theoretical paper focused on generalization bounds and proofs. It does not involve the use of specific datasets for training or experimentation, nor does it provide concrete access information for any dataset.
Dataset Splits | No | This is a theoretical paper without empirical validation on datasets; therefore, no training/test/validation splits are described.
Hardware Specification | No | This is a theoretical paper. No hardware specifications are mentioned as part of any experimental setup.
Software Dependencies | No | This is a theoretical paper. No specific software dependencies with version numbers are mentioned.
Experiment Setup | No | This is a theoretical paper and does not describe any experimental setup details such as hyperparameters or training configurations.
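The two bounds quoted in the Research Type row are restated below as a minimal LaTeX sketch. The definitions of γ-uniform stability, the estimation error Δ(S), and the loss notation ℓ are standard assumptions filled in from the stability literature rather than quoted from this page.

```latex
% Restatement of the two bounds from the abstract.
% Assumed (not defined on this page): A is a gamma-uniformly stable
% learning algorithm, S = (x_1, ..., x_n) is an i.i.d. sample from a
% distribution D, and ell is a bounded loss function.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let the estimation error of $A$ on $S$ be
\[
  \Delta(S) = \mathop{\mathbb{E}}_{x \sim \mathcal{D}}\bigl[\ell(A(S), x)\bigr]
            - \frac{1}{n}\sum_{i=1}^{n} \ell(A(S), x_i).
\]
If $A$ is $\gamma$-uniformly stable, then with probability at least
$1 - \delta$ over the choice of $S$,
\[
  \Delta(S) = O\!\Bigl(\sqrt{\bigl(\gamma + \tfrac{1}{n}\bigr)\log\tfrac{1}{\delta}}\Bigr),
\]
and the second moment satisfies the tight bound
\[
  \mathbb{E}\bigl[\Delta(S)^2\bigr] = O\bigl(\gamma^2 + \tfrac{1}{n}\bigr),
\]
improving on the previous best second-moment bound of
$O\bigl(\gamma + \tfrac{1}{n}\bigr)$.

\end{document}
```

For context on why this matters: in the common regime γ = O(1/n), as for strongly convex regularized ERM (Bousquet and Elisseeff, 2002), the high-probability bound above scales as O(√(log(1/δ)/n)); this regime is an illustrative assumption on our part, not a claim made on this page.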