Towards Problem-dependent Optimal Learning Rates

Authors: Yunbei Xu, Assaf Zeevi

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." In this paper we propose a new framework based on a "uniform localized convergence" principle. We provide the first (moment-penalized) estimator that achieves the optimal variance-dependent rate for general "rich" classes; we also establish an improved loss-dependent rate for standard empirical risk minimization. (An illustrative sketch of these rate types follows the table.)
Researcher Affiliation | Academia | Yunbei Xu (Columbia University, New York, NY 10027, yunbei.xu@gsb.columbia.edu); Assaf Zeevi (Columbia University, New York, NY 10025, assaf@gsb.columbia.edu)
Pseudocode | No | The paper describes "Strategy 1" and "Strategy 2" as procedural steps within the main text, but these are not formatted as distinct pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor links to code repositories.
Open Datasets | No | The paper is theoretical and does not describe experiments or use datasets, so no dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not describe experiments, so no dataset split information is provided.
Hardware Specification | No | The paper is theoretical and does not describe experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe experiments, so no software dependencies with version numbers are listed.
Experiment Setup | No | The paper is theoretical and does not describe experiments, so no experimental setup details such as hyperparameters or training configurations are provided.
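For orientation only, the following schematic is not taken from the paper; the notation $n$, $\mathrm{comp}(\mathcal{F})$, $\mathcal{L}^{*}$, and $\mathcal{V}^{*}$ is assumed here for the sample size, a complexity measure of the hypothesis class, and the loss and variance at the best hypothesis. It shows the typical shape of the rate types named in the abstract: a worst-case uniform-convergence rate, sharpened to a loss-dependent rate (for standard ERM) and a variance-dependent rate (for a moment-penalized estimator).

\[
\sqrt{\frac{\mathrm{comp}(\mathcal{F})}{n}}
\;\longrightarrow\;
\sqrt{\frac{\mathcal{L}^{*}\,\mathrm{comp}(\mathcal{F})}{n}} + \frac{\mathrm{comp}(\mathcal{F})}{n}
\;\longrightarrow\;
\sqrt{\frac{\mathcal{V}^{*}\,\mathrm{comp}(\mathcal{F})}{n}} + \frac{\mathrm{comp}(\mathcal{F})}{n}
\]

For losses taking values in $[0,1]$, $\mathcal{V}^{*} \le \mathcal{L}^{*} \le 1$, so such bounds can be much tighter than the worst-case rate when the best hypothesis has small loss or variance; the paper's exact statements, conditions, and complexity measures differ from this sketch.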