Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence

Authors: Margalit Glasgow, Colin Wei, Mary Wootters, Tengyu Ma

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main contribution is proving novel generalization bounds in two such settings, one linear and one non-linear. We prove a new type of margin bound showing that above a certain signal-to-noise threshold, any near-max-margin classifier will achieve almost no test loss in these two settings. To our knowledge, our results are the first instance of theoretically proving generalization in a neural network setting (that is not in the NTK regime) where UC provably fails. (A hedged sketch of the margin definitions is given after this table.)
Researcher Affiliation | Academia | Margalit Glasgow, Colin Wei, Mary Wootters & Tengyu Ma; Department of Computer Science, Stanford University, Stanford, CA 94305, USA; {mglasgow,colinwei,marykw,tengyuma}@stanford.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper defines theoretical data distributions (e.g., D_{µ,σ,d} and D_{µ1,µ2,σ,d}) for its analysis but, as a theoretical work, it neither uses a publicly available dataset for empirical evaluation nor provides access information for one. (An illustrative sampler for a distribution of this form is given after this table.)
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with data splits, so no training, validation, or test split information is provided.
Hardware Specification | No | The paper is theoretical and reports no experiments requiring hardware, so no hardware details are provided.
Software Dependencies | No | The paper is theoretical and reports no experiments or implementation details, so no software dependencies with version numbers are provided.
Experiment Setup | No | The paper is theoretical and does not involve empirical experiments, so no experimental setup details such as hyperparameter values or training configurations are provided.
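
For context on the Research Type row, the following is a minimal sketch of what a near-max-margin classifier means, assuming the common unit-norm formalization over training data (x_i, y_i); the paper's exact normalization, function class f_θ, and definition of "near" may differ.

```latex
% Hedged sketch; the paper's exact normalization may differ.
% Max margin over unit-norm parameters theta on training data (x_i, y_i):
\[
  \gamma^{\star} \;=\; \max_{\|\theta\| \le 1} \; \min_{i \in [n]} \; y_i f_{\theta}(x_i).
\]
% A unit-norm classifier is alpha-near-max-margin if its margin is close to gamma*:
\[
  \min_{i \in [n]} y_i f_{\theta}(x_i) \;\ge\; (1 - \alpha)\,\gamma^{\star}.
\]
```

The abstract's claim can then be read as: above a signal-to-noise threshold, every classifier satisfying the second condition generalizes, even though uniform convergence over the whole class fails.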
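The distributions D_{µ,σ,d} in the Open Datasets row are defined formally in the paper; purely as an illustration, the sketch below samples from a symmetric Gaussian mixture of the form that notation suggests (y uniform in {−1, +1}, x = yµ + σξ with ξ standard Gaussian in d dimensions). Both this form and the helper name sample_gaussian_mixture are assumptions, not the paper's definition.

```python
import numpy as np

def sample_gaussian_mixture(mu: np.ndarray, sigma: float, n: int,
                            rng: np.random.Generator):
    """Draw n labeled points from an assumed D_{mu,sigma,d}:
    y ~ Uniform{-1, +1}, x = y * mu + sigma * xi, xi ~ N(0, I_d).
    (Hypothetical form; the paper's exact distribution may differ.)"""
    d = mu.shape[0]
    y = rng.choice([-1.0, 1.0], size=n)          # uniform random labels
    xi = rng.standard_normal((n, d))             # isotropic Gaussian noise
    x = y[:, None] * mu[None, :] + sigma * xi    # signal plus noise
    return x, y

# Example: d = 100, signal along the first coordinate, moderate noise.
rng = np.random.default_rng(0)
mu = np.zeros(100)
mu[0] = 1.0
X, Y = sample_gaussian_mixture(mu, sigma=0.5, n=32, rng=rng)
```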