Stability and Generalization for Markov Chain Stochastic Gradient Methods

Authors: Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we provide a comprehensive generalization analysis of MC-SGMs for both minimization and minimax problems through the lens of algorithmic stability in the framework of statistical learning theory. For empirical risk minimization (ERM) problems, we establish the optimal excess population risk bounds for both smooth and non-smooth cases by introducing on-average argument stability. For minimax problems, we develop a quantitative connection between on-average argument stability and generalization error which extends the existing results for uniform stability [38]. We further develop the first nearly optimal convergence rates for convex-concave problems both in expectation and with high probability, which, combined with our stability results, show that the optimal generalization bounds can be attained for both smooth and non-smooth cases. To the best of our knowledge, this is the first generalization analysis of SGMs when the gradients are sampled from a Markov process.
Researcher Affiliation | Academia | 1) Liu Bie Ju Centre for Mathematical Sciences, City University of Hong Kong; 2) School of Computer Science, University of Birmingham; 3) Department of Mathematics and Statistics, State University of New York at Albany; 4) School of Mathematics and Statistics, University of Sydney
Pseudocode | No | The paper describes algorithms such as SGD and SGDA with Markov sampling in textual form but does not provide any structured pseudocode or algorithm blocks (an illustrative sketch of such an update loop is given after this table).
Open Source Code | No | The paper is theoretical and does not mention releasing source code for its proposed methodology. The checklist answer 'Did you include any new assets either in the supplemental material or as a URL? [Yes]' is ambiguous and likely refers to the supplementary theoretical material or existing assets rather than implementation code for the paper's contributions, especially since the paper also answers 'If you ran experiments... [N/A]'.
Open Datasets | No | The paper is theoretical and does not involve experiments with datasets; accordingly, it does not specify dataset availability or provide access information.
Dataset Splits | No | The paper is theoretical and does not involve experiments with datasets; accordingly, it does not provide details on training, validation, or test splits.
Hardware Specification | No | The paper's checklist states 'If you ran experiments... [N/A]', indicating no experiments were conducted, so no hardware specifications are provided.
Software Dependencies | No | The paper's checklist states 'If you ran experiments... [N/A]', indicating no experiments were conducted, so no software dependencies are listed.
Experiment Setup | No | The paper's checklist states 'If you ran experiments... [N/A]', indicating no experiments were conducted, so no experimental setup details are provided.
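
For context on the setting summarized in the Research Type row: MC-SGM replaces the i.i.d. index sampling of standard SGD with a Markov chain over the training indices. The display below is a minimal sketch in our own notation (the symbols w_t, eta_t, f, and j_t are illustrative choices, not copied from the paper):

% Sketch of the MC-SGM update in our notation (not taken verbatim from the paper).
% S = {z_1, ..., z_n} is the training set; (j_t)_{t >= 1} is an ergodic Markov
% chain on the index set {1, ..., n} rather than an i.i.d. uniform sequence.
\[
  w_{t+1} \;=\; w_t \;-\; \eta_t \, \partial f(w_t; z_{j_t}),
\]
% The quantity bounded in the generalization analysis is the excess population
% risk F(w) - \min_{w'} F(w'), where F(w) = \mathbb{E}_z[f(w; z)].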
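
Because the Pseudocode row notes that the algorithms appear only in textual form, the following Python sketch shows what SGD with Markov chain sampling looks like. It is a reconstruction under standard assumptions, not the authors' implementation; the function name markov_sgd, the least-squares loss, and the lazy-cycle transition matrix are all hypothetical choices made for illustration.

# Illustrative sketch of SGD with Markov chain sampling (MC-SGM).
# This is a reconstruction, not the authors' code; all names and the
# least-squares loss are hypothetical choices.
import numpy as np

def markov_sgd(X, y, P, steps=1000, eta=0.01, seed=None):
    """SGD where the sampled example index follows a Markov chain.

    X: (n, d) feature matrix; y: (n,) targets.
    P: (n, n) row-stochastic transition matrix over example indices;
       i.i.d. SGD is the special case where every row of P is uniform.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    j = rng.integers(n)  # initial state of the index chain
    for _ in range(steps):
        # Gradient of the least-squares loss 0.5 * (x_j^T w - y_j)^2.
        grad = (X[j] @ w - y[j]) * X[j]
        w = w - eta * grad
        # Advance the chain: next index drawn from row j of P.
        j = rng.choice(n, p=P[j])
    return w

# Usage: a "lazy cycle" chain (stay w.p. 1/2, step forward w.p. 1/2),
# a simple ergodic non-i.i.d. sampler with uniform stationary distribution.
n, d = 50, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)
P = 0.5 * np.eye(n) + 0.5 * np.roll(np.eye(n), 1, axis=1)
w_hat = markov_sgd(X, y, P, steps=5000, eta=0.05, seed=1)

Taking every row of P to be the uniform distribution recovers ordinary i.i.d. SGD, so the transition matrix is the single knob that isolates the Markov sampling assumption.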