A Universal Growth Rate for Learning with Smooth Surrogate Losses

Authors: Anqi Mao, Mehryar Mohri, Yutao Zhong

NeurIPS 2024

Each item below lists the reproducibility variable, the assessed result, and the LLM response.
Research Type: Theoretical
The paper presents a comprehensive analysis of the growth rate of H-consistency bounds (and excess error bounds) for surrogate losses used in classification. It proves a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification, with both upper and lower bounds under mild assumptions, and this result translates to excess error bounds as well (see the calibration sketch after this list).
Researcher Affiliation: Collaboration
Anqi Mao, Courant Institute, New York, NY 10012, aqmao@cims.nyu.edu; Mehryar Mohri, Google Research & CIMS, New York, NY 10011, mohri@google.com; Yutao Zhong, Courant Institute, New York, NY 10012, yutao@cims.nyu.edu.
Pseudocode: No
The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: No
The paper is purely theoretical and reports no experiments, so no code accompanies it.
Open Datasets: No
The paper does not include experiments, so no datasets are used for training or evaluation.

Dataset Splits: No
Since no experiments are conducted, no dataset split information is provided.

Hardware Specification: No
Since no experiments are conducted, no hardware specifications are provided.

Software Dependencies: No
Since no experiments are conducted, no software dependencies are listed.

Experiment Setup: No
Since no experiments are conducted, no experimental setup details are provided.
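
To make the square-root growth concrete, here is a standard calibration computation for one smooth margin-based loss, the exponential loss. It uses the classical psi-transform of Bartlett, Jordan, and McAuliffe (the excess error bound in the special case where the hypothesis set is all measurable functions) with standard notation rather than anything quoted from the paper, so it should be read as an illustrative sketch of the rate, not as the paper's own bound.

For the exponential loss \Phi(t) = e^{-t}, the \psi-transform is

    \psi(\theta) = 1 - \sqrt{1 - \theta^{2}},

which satisfies \psi(\theta) = \theta^{2}/2 + O(\theta^{4}) near zero, so its inverse grows like a square root. The resulting excess error bound is

    R_{0\text{-}1}(h) - R_{0\text{-}1}^{*} \;\le\; \psi^{-1}\!\left(R_{\Phi}(h) - R_{\Phi}^{*}\right) \;\approx\; \sqrt{2\left(R_{\Phi}(h) - R_{\Phi}^{*}\right)}

as the surrogate excess risk tends to zero, which matches the square-root rate stated above. By contrast, the non-smooth hinge loss has \psi(\theta) = \theta and therefore a linear rate, which is one way to see why smoothness drives the result.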