Sharper Generalization Bounds for Pairwise Learning

Authors: Yunwen Lei, Antoine Ledent, Marius Kloft

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we provide a refined stability analysis by developing generalization bounds which can be √n-times faster than the existing results, where n is the sample size. This implies excess risk bounds of the order O(n^{-1/2}) (up to a logarithmic factor) for both regularized risk minimization and stochastic gradient descent. We also introduce a new on-average stability measure to develop optimistic bounds in a low noise setting. We apply our results to ranking and metric learning, and clearly show the advantage of our generalization bounds over the existing analysis. (An illustrative sketch of SGD for pairwise learning follows the table.)
Researcher Affiliation | Academia | Yunwen Lei (1,2), Antoine Ledent (2), Marius Kloft (2). 1: School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom. 2: Department of Computer Science, TU Kaiserslautern, Kaiserslautern 67653, Germany.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. No repository link or explicit code-release statement is found.
Open Datasets | No | The paper is theoretical and does not conduct new experiments requiring specific public datasets. It refers to a general training dataset S = {z_1, ..., z_n} but does not provide concrete access information for any specific dataset.
Dataset Splits | No | The paper is theoretical and does not conduct experiments requiring dataset splits, so no split information is provided.
Hardware Specification | No | The paper is theoretical and reports no experiments, so no hardware details are provided.
Software Dependencies | No | The paper is theoretical and reports no experiments, so no ancillary software or version numbers are provided.
Experiment Setup | No | The paper is theoretical and reports no experiments, so no hyperparameter values, training configurations, or system-level settings are provided.
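
Since the paper is purely theoretical and releases no code, the sketch below is our own illustration, not the authors' implementation. In the setting the abstract describes, the empirical risk of a model w averages a pairwise loss over all ordered pairs of distinct training examples, R_S(w) = (1/(n(n-1))) * sum_{i != j} f(w; z_i, z_j), and SGD descends the subgradient of one randomly drawn pair per step. The pairwise hinge loss, the constant step size eta, and all function names below are illustrative assumptions.

```python
import numpy as np

def pairwise_hinge_grad(w, xi, yi, xj, yj):
    """Subgradient of an illustrative pairwise hinge loss
    f(w; z_i, z_j) = max(0, 1 - sign(y_i - y_j) * <w, x_i - x_j>),
    a standard surrogate for ranking/metric-learning objectives."""
    s = np.sign(yi - yj)
    if s == 0:  # tied labels impose no ranking constraint
        return np.zeros_like(w)
    if s * w.dot(xi - xj) >= 1.0:  # margin satisfied: zero subgradient
        return np.zeros_like(w)
    return -s * (xi - xj)

def sgd_pairwise(X, y, T=1000, eta=0.01, seed=0):
    """SGD for pairwise learning: at each step, draw a pair (i_t, j_t)
    uniformly at random with i_t != j_t and take a subgradient step on
    the per-pair loss. T and eta are illustrative defaults."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(T):
        i, j = rng.choice(n, size=2, replace=False)
        w -= eta * pairwise_hinge_grad(w, X[i], y[i], X[j], y[j])
    return w

# Illustrative usage on synthetic data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = X @ np.ones(5)  # real-valued relevance scores to rank
    w_hat = sgd_pairwise(X, y, T=5000, eta=0.05)
    print(w_hat)
```

The constant step size is for brevity only; the paper's excess risk bounds apply to particular step-size schedules analyzed therein, not to this specific configuration.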