Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias

Authors: Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The generalization mystery of overparametrized deep nets has motivated efforts to understand how gradient descent (GD) converges to low-loss solutions that generalize well. Real-life neural networks are initialized from small random values and trained with cross-entropy loss for classification (unlike the "lazy" or "NTK" regime of training where analysis was more successful), and a recent sequence of results (Lyu and Li, 2020; Chizat and Bach, 2020; Ji and Telgarsky, 2020a) provide theoretical evidence that GD may converge to the "max-margin" solution with zero loss, which presumably generalizes well. However, the global optimality of margin is proved only in some settings where neural nets are infinitely or exponentially wide. The current paper is able to establish this global optimality for two-layer Leaky ReLU nets trained with gradient flow on linearly separable and symmetric data, regardless of the width. The analysis also gives some theoretical justification for recent empirical findings (Kalimeris et al., 2019) on the so-called simplicity bias of GD towards linear or other "simple" classes of solutions, especially early in training. (An illustrative numerical sketch of this setting is given after the table.)
Researcher Affiliation | Academia | Kaifeng Lyu (Princeton University, klyu@cs.princeton.edu); Zhiyuan Li (Princeton University, zhiyuanli@cs.princeton.edu); Runzhe Wang (Princeton University, runzhew@princeton.edu); Sanjeev Arora (Princeton University, arora@cs.princeton.edu)
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link for open-sourcing the code for the described methodology.
Open Datasets | No | The paper is theoretical and does not conduct experiments involving datasets or their public availability.
Dataset Splits | No | The paper is theoretical and does not involve specific training/validation/test dataset splits for experimental reproduction.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers for experimental reproducibility.
Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup, such as hyperparameters or training settings.
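
Below is a minimal numerical sketch of the setting summarized in the Research Type row, not code from the paper: full-batch gradient descent with a small step size is used as a discrete stand-in for gradient flow on a two-layer Leaky ReLU network with small random initialization, trained with logistic (cross-entropy) loss on a symmetric, linearly separable dataset. The data construction, hidden width, Leaky ReLU slope, learning rate, and iteration count are all illustrative assumptions; the printed quantity is the standard normalized margin for a 2-homogeneous network (minimum margin divided by the squared parameter norm).

```python
# Illustrative sketch (assumptions noted in comments), not the authors' experiment.
import numpy as np

rng = np.random.default_rng(0)

# Symmetric, linearly separable 2D data: x and -x both appear in the dataset,
# with labels given by the sign of the first coordinate (assumed construction).
X_half = rng.normal(size=(50, 2)) + np.array([2.0, 0.0])
X = np.vstack([X_half, -X_half])
y = np.sign(X[:, 0])
n = len(y)

m, alpha = 20, 0.1                    # hidden width and Leaky ReLU slope (arbitrary choices)
W = rng.normal(size=(m, 2)) * 1e-3    # small random initialization
a = rng.normal(size=m) * 1e-3

lr = 0.05                             # small step size, a discrete proxy for gradient flow
for step in range(20001):
    pre = X @ W.T                             # (n, m) pre-activations
    act = np.where(pre > 0, pre, alpha * pre)  # Leaky ReLU
    f = act @ a                               # network outputs f(x_i)

    yf = np.clip(y * f, -30.0, 30.0)          # clip for numerical stability
    s = -y / (1.0 + np.exp(yf))               # d/df of the logistic loss log(1 + exp(-y f))
    dact = np.where(pre > 0, 1.0, alpha)

    grad_a = act.T @ s / n
    grad_W = ((s[:, None] * dact) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

    if step % 5000 == 0:
        # Normalized margin for a 2-homogeneous network: min_i y_i f(x_i) / ||params||^2.
        norm_sq = np.linalg.norm(W) ** 2 + np.linalg.norm(a) ** 2
        print(f"step {step:6d}  normalized margin = {(y * f).min() / norm_sq:.4f}")
```

If the paper's result applies to this toy instance, one would expect the printed normalized margin to become positive and keep improving as training proceeds, with the learned classifier approaching a near-linear decision boundary; the sketch is only meant to make the setting concrete, not to verify the theorem.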