Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Universal Growth Rate for Learning with Smooth Surrogate Losses

Authors: Anqi Mao, Mehryar Mohri, Yutao Zhong

NeurIPS 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper presents a comprehensive analysis of the growth rate of H-consistency bounds (and excess error bounds) for various surrogate losses used in classification. It proves a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification, providing both upper and lower bounds under mild assumptions. This result also translates to excess error bounds.
Researcher Affiliation | Collaboration | Anqi Mao, Courant Institute, New York, NY 10012, EMAIL; Mehryar Mohri, Google Research & CIMS, New York, NY 10011, EMAIL; Yutao Zhong, Courant Institute, New York, NY 10012, EMAIL
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include experiments requiring code.
Open Datasets | No | The paper does not include experiments, so no dataset is used for training.
Dataset Splits | No | The paper does not include experiments, so no dataset split information is provided.
Hardware Specification | No | The paper does not include experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper does not include experiments, so no software dependencies are specified.
Experiment Setup | No | The paper does not include experiments, so no experimental setup details are provided.
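The square-root growth rate summarized in the Research Type row can be sketched as follows. This is an illustrative rendering only, using generic notation for the zero-one loss and a smooth margin-based surrogate Φ (the symbols R, R*, and Γ here are standing in for the paper's own definitions):

```latex
% H-consistency bound with concave modulus Γ relating the
% zero-one excess error to the surrogate excess error:
%   R_{0\text{-}1}(h) - R^*_{0\text{-}1}
%     \le \Gamma\bigl(R_{\Phi}(h) - R^*_{\Phi}\bigr),
% where, for smooth margin-based surrogates, the growth rate
% of Γ near zero is square-root:
%   \Gamma(t) = \Theta\bigl(\sqrt{t}\bigr) \quad \text{as } t \to 0^+ .
\[
  R_{0\text{-}1}(h) - R^*_{0\text{-}1}
    \;\le\; \Gamma\bigl(R_{\Phi}(h) - R^*_{\Phi}\bigr),
  \qquad
  \Gamma(t) = \Theta\bigl(\sqrt{t}\bigr) \ \text{as } t \to 0^+ .
\]
```

Intuitively, driving the surrogate excess error to t guarantees a zero-one excess error on the order of √t; see the paper itself for the precise constants and assumptions.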