Stability and Generalization of Learning Algorithms that Converge to Global Optima

Authors: Zachary Charles, Dimitris Papailiopoulos

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We establish novel generalization bounds for learning algorithms that converge to global minima. We derive black-box stability results that only depend on the convergence of a learning algorithm and the geometry around the minimizers of the empirical risk function. The results are shown for non-convex loss functions satisfying the Polyak-Łojasiewicz (PL) and the quadratic growth (QG) conditions, which we show arise for 1-layer neural networks with leaky ReLU activations and deep neural networks with linear activations. We use our results to establish the stability of first-order methods such as stochastic gradient descent (SGD), gradient descent (GD), randomized coordinate descent (RCD), and the stochastic variance reduced gradient method (SVRG), in both the PL and the strongly convex setting. Our results match or improve state-of-the-art generalization bounds and can easily extend to similar optimization algorithms. (The standard forms of the PL and QG conditions are recalled in the note after this table.)
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Wisconsin, USA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not conduct empirical experiments, so it provides no dataset or access information for any publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments, so no dataset split information for validation is provided.
Hardware Specification | No | The paper is theoretical and does not conduct empirical experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not conduct empirical experiments, so no ancillary software details with version numbers (e.g., libraries or frameworks used to run experiments) are provided.
Experiment Setup | No | The paper is theoretical and does not conduct empirical experiments, so no experimental setup details (such as hyperparameters or training configurations) are provided.
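Note on the PL and QG conditions cited in the abstract above. As a reader aid (the symbols mu, f^*, and w_p below are standard notation, not taken verbatim from the paper), the usual forms of the two conditions for an empirical risk f with minimum value f^* and constant mu > 0 are, in LaTeX notation:

\text{(PL)}\quad \tfrac{1}{2}\,\lVert \nabla f(w) \rVert^2 \;\ge\; \mu\,\bigl(f(w) - f^*\bigr) \quad \text{for all } w,
\text{(QG)}\quad f(w) - f^* \;\ge\; \tfrac{\mu}{2}\,\lVert w - w_p \rVert^2, \quad \text{where } w_p \text{ is the projection of } w \text{ onto the set of global minimizers.}

These are the standard textbook statements of the conditions; the paper's exact constants and normalization may differ.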