Unifying lower bounds on prediction dimension of convex surrogates

Authors: Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We derive the first non-trivial lower bounds on the prediction dimension of proper composite losses. Our lower bounds unify and generalize several results in the literature, including the previously established lower bounds for the squared, logistic, and hinge losses. Our analysis provides a new way to prove lower bounds on the prediction dimension, and we apply it to several new losses, including the truncated quadratic loss and a smoothed version of the piecewise linear hinge loss.
Researcher Affiliation | Academia | Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner. University of Colorado Boulder.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements about releasing open-source code or links to a code repository for the described methodology.
Open Datasets | No | This is a theoretical paper that does not use datasets for training or evaluation, therefore no dataset availability information is provided.
Dataset Splits | No | This is a theoretical paper and does not involve data splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not provide details on an experimental setup, hyperparameters, or training configurations.
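For reference, the losses named in the Research Type response (squared, logistic, hinge, truncated quadratic) are standard one-dimensional margin-based convex surrogates. A minimal sketch under the usual definitions, with labels in {-1, +1} and a real-valued prediction u (the exact forms studied in the paper may differ; this is only illustrative):

```python
import math

# Each loss takes a real-valued prediction u and a label y in {-1, +1};
# all four are convex in u, which is what makes them usable as surrogates.
def squared(u, y):
    return (1 - y * u) ** 2

def logistic(u, y):
    return math.log(1 + math.exp(-y * u))

def hinge(u, y):
    return max(0.0, 1 - y * u)

def truncated_quadratic(u, y):
    # Common "truncated quadratic" form: hinge margin, squared.
    return max(0.0, 1 - y * u) ** 2

# Sanity check: at u = 0 each loss penalizes both labels equally.
for loss in (squared, logistic, hinge, truncated_quadratic):
    assert loss(0.0, 1) == loss(0.0, -1)
```

All four losses here take a single real-valued prediction, i.e. they have prediction dimension one, which is the quantity the paper lower-bounds for general proper composite losses.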