Algorithmic Stability and Hypothesis Complexity

Authors: Tongliang Liu, Gábor Lugosi, Gergely Neu, Dacheng Tao

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We introduce a notion of algorithmic stability of learning algorithms that we term argument stability that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.
Researcher Affiliation | Academia | (1) UBTech Sydney AI Institute, School of IT, FEIT, The University of Sydney, Australia; (2) Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain; (3) ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain; (4) Barcelona Graduate School of Economics; (5) AI group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not use or link to any public datasets for empirical evaluation; it refers to a 'training sample' only in a theoretical context.
Dataset Splits | No | The paper is theoretical and conducts no empirical experiments, so no dataset split information is provided.
Hardware Specification | No | The paper is theoretical and reports no computational experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and describes no software implementation details or dependencies.
Experiment Setup | No | The paper is theoretical and describes no experimental setup, hyperparameters, or training configurations.
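The notion described in the abstract can be sketched informally as follows. This is a hedged paraphrase, not the paper's exact statement: the symbols A, S, and beta_n are assumed notation for the learning algorithm, the training sample, and the stability parameter.

```latex
% Sketch of uniform argument stability, following the abstract's description.
% A : sample -> hypothesis is the learning algorithm, S = (z_1, \dots, z_n) a
% training sample, and S^{(i)} the sample with z_i replaced by an independent
% copy z_i'. Hypotheses live in a normed (Banach) space (\mathcal{H}, \|\cdot\|).
A \text{ has uniform argument stability } \beta_n
\quad\Longleftrightarrow\quad
\sup_{S,\, z_i'} \bigl\| A(S) - A(S^{(i)}) \bigr\| \le \beta_n
\quad \text{for every } i \in \{1, \dots, n\}.
```

In contrast to classical hypothesis (loss) stability, the perturbation here is measured directly in the norm of the hypothesis space; for a Lipschitz loss this controls the change in incurred losses, which is how the paper's martingale-based bounds translate argument stability into generalization-error guarantees.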