What Happens after SGD Reaches Zero Loss? -- A Mathematical Framework

Authors: Zhiyuan Li, Tianhao Wang, Sanjeev Arora

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The current paper gives a general framework for such analysis by adapting ideas from Katzenberger (1991). It allows, in principle, a complete characterization of the regularization effect of SGD around such a manifold, i.e., the implicit bias, using a stochastic differential equation (SDE) describing the limiting dynamics of the parameters, which is determined jointly by the loss function and the noise covariance. This yields some new results: (1) a global analysis of the implicit bias valid for η^{-2} steps, in contrast to the local analysis of Blanc et al. (2020) that is only valid for η^{-1.6} steps, and (2) allowing arbitrary noise covariance. As an application, we show that with arbitrarily large initialization, label-noise SGD can always escape the kernel regime and only requires O(κ ln d) samples for learning a κ-sparse overparametrized linear model in R^d (Woodworth et al., 2020), while GD initialized in the kernel regime requires Ω(d) samples. This upper bound is minimax optimal and improves the previous Õ(κ^2) upper bound (HaoChen et al., 2020). (A minimal illustrative sketch of this sparse-recovery setting appears after this table.)
Researcher Affiliation | Academia | Zhiyuan Li, Department of Computer Science, Princeton University (zhiyuanli@cs.princeton.edu); Tianhao Wang, Department of Statistics and Data Science, Yale University (tianhao.wang@yale.edu); Sanjeev Arora, Department of Computer Science, Princeton University (arora@cs.princeton.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that code for the described methodology is open-sourced.
Open Datasets | No | The paper is theoretical and analyzes models under assumptions such as "training data are sampled from either i.i.d. Gaussian or Boolean distribution" (Section 6, Theorem 6.1). It does not specify or link to any publicly available datasets used for empirical training.
Dataset Splits | No | The paper is theoretical and does not describe actual experiments; it therefore does not provide specific details about training, validation, or test dataset splits.
Hardware Specification | No | The paper focuses on theoretical analysis and does not describe any experimental setup that would require hardware specifications.
Software Dependencies | No | The paper focuses on theoretical analysis and does not describe any experimental setup that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training configurations.
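
To make the sparse-recovery setting from the first row concrete, below is a minimal sketch (not taken from the paper) of label-noise SGD on the overparametrized linear model of Woodworth et al. (2020), where the effective weight vector is w = u*u - v*v. All variable names, the Rademacher form of the label noise, and every hyperparameter value are illustrative assumptions, not the authors' configuration; the paper's claim is only that such dynamics escape the kernel regime and recover a κ-sparse target with O(κ ln d) samples.

```python
# Illustrative sketch, assuming a quadratically overparametrized linear model
# w = u*u - v*v (Woodworth et al., 2020) trained with label-noise SGD.
# Hyperparameters below are arbitrary choices for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

d, n, kappa = 100, 40, 3            # ambient dimension, samples, sparsity (assumed)
w_star = np.zeros(d)
w_star[:kappa] = 1.0                # ground-truth kappa-sparse vector
X = rng.standard_normal((n, d))     # i.i.d. Gaussian design, as in Theorem 6.1
y = X @ w_star

alpha = 1.0                         # large initialization scale (kernel-regime start)
u = np.full(d, alpha)
v = np.full(d, alpha)

eta, delta, steps = 1e-3, 0.5, 200_000   # step size, label-noise level, iterations

for t in range(steps):
    i = rng.integers(n)
    w = u * u - v * v
    noisy_y = y[i] + delta * rng.choice([-1.0, 1.0])  # fresh label noise each step
    resid = X[i] @ w - noisy_y
    # gradient of 0.5 * resid^2 with respect to (u, v) through w = u*u - v*v
    g = resid * X[i]
    u -= eta * 2.0 * g * u
    v += eta * 2.0 * g * v

w = u * u - v * v
print("relative recovery error:", np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```

How close the final iterate gets to the sparse target depends on the assumed step size, noise level, and iteration budget; the sketch is only meant to show the training setup the theory analyzes, not to reproduce the paper's quantitative sample-complexity result.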