Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Towards Problem-dependent Optimal Learning Rates

Authors: Yunbei Xu, Assaf Zeevi

NeurIPS 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." In this paper we propose a new framework based on a "uniform localized convergence" principle. We provide the first (moment-penalized) estimator that achieves the optimal variance-dependent rate for general "rich" classes; we also establish an improved loss-dependent rate for standard empirical risk minimization.
Researcher Affiliation | Academia | Yunbei Xu, Columbia University, New York, NY 10027 (EMAIL); Assaf Zeevi, Columbia University, New York, NY 10025 (EMAIL)
Pseudocode | No | The paper describes "Strategy 1" and "Strategy 2" as procedural steps within the main text, but these are not formatted as distinct pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor links to code repositories.
Open Datasets | No | The paper is theoretical and does not describe experiments or use datasets, so no dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not describe experiments, so no dataset split information is provided.
Hardware Specification | No | The paper is theoretical and does not describe experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe experiments, so no software dependencies with version numbers are listed.
Experiment Setup | No | The paper is theoretical and does not describe experiments, so no experimental setup details such as hyperparameters or training configurations are provided.