Quantifying Learning Guarantees for Convex but Inconsistent Surrogates
Authors: Kirill Struminsky, Simon Lacoste-Julien, Anton Osokin
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper's key technical contribution is a new lower bound on the calibration function for the quadratic surrogate (Theorem 3), which is non-trivial (not always zero) in inconsistent cases and strictly more general than the bound of [14]; an illustrative sketch of the calibration function follows the table. |
| Researcher Affiliation | Academia | Kirill Struminsky (NRU HSE, Moscow, Russia); Simon Lacoste-Julien (MILA and DIRO, Université de Montréal, Canada; CIFAR Fellow); Anton Osokin (NRU HSE and Skoltech, Moscow, Russia; Samsung-HSE Joint Lab). NRU HSE is the National Research University Higher School of Economics; Skoltech is the Skolkovo Institute of Science and Technology. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides neither an explicit statement nor a link releasing open-source code for the described methodology. |
| Open Datasets | No | The paper performs theoretical analysis and numerical computations on abstract loss functions (tree-structured loss, mAP loss) rather than using specific publicly available datasets for training or evaluation. |
| Dataset Splits | No | The paper is theoretical and involves no empirical model training, and thus provides no information about training, validation, or test splits. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware used for its computations or analysis. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, required to replicate its computations. |
| Experiment Setup | No | The paper focuses on theoretical analysis and numerical computation of bounds, and therefore does not describe a conventional experimental setup with hyperparameters or training configurations. |
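As context for the Research Type row above, here is a minimal, self-contained numerical sketch (not the authors' code) of the quantity the paper studies: the calibration function δ(ε) of the quadratic surrogate from [14], computed by brute force on a toy binary 0-1 loss. The conventions used below — surrogate Φ(f, y) = ||f + L[:, y]||² / (2k), conditional-risk minimizer f*(q) = −Lq, predictor argmax_c f_c — and the toy loss matrix are illustrative assumptions; the paper's actual contribution is a lower bound on δ(ε) in inconsistent settings such as the tree-structured and mAP losses, which this consistent toy does not cover.

```python
import numpy as np

# Illustrative sketch (not the authors' code): brute-force computation of the
# calibration function delta(eps) for the quadratic surrogate of [14] on a
# toy binary 0-1 loss. Assumed conventions: Phi(f, y) = ||f + L[:, y]||^2/(2k),
# conditional-risk minimizer f*(q) = -L q, predictor pred(f) = argmax_c f_c.

k = 2
L = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # L[c, y]: loss of predicting c when the label is y

def delta(eps, n_grid=2001):
    """delta(eps): the smallest excess surrogate risk over all label
    distributions q and score vectors whose prediction is eps-suboptimal."""
    best = np.inf
    for p in np.linspace(0.0, 1.0, n_grid):
        q = np.array([p, 1.0 - p])
        z = -L @ q                 # surrogate-optimal scores f*(q)
        risks = L @ q              # expected loss of each prediction under q
        for c in range(k):
            if risks[c] - risks.min() < eps:
                continue           # predicting c is not eps-suboptimal
            # The excess surrogate risk equals ||f - f*(q)||^2 / (2k); its
            # infimum over {f : f_c is the argmax} is the squared distance
            # from z to that cone. For k = 2, projecting onto the cone just
            # averages the two coordinates when their ordering is violated.
            gap = z[1 - c] - z[c]
            excess = (gap ** 2 / 2.0) / (2 * k) if gap > 0 else 0.0
            best = min(best, excess)
    return best

for eps in (0.2, 0.5, 1.0):
    print(f"delta({eps:.1f}) ~= {delta(eps):.4f}")  # quadratic growth in eps
```

On this toy loss δ(ε) grows quadratically in ε, the typical behavior of quadratic surrogates in consistent cases; the paper's Theorem 3 instead lower-bounds δ(ε) in settings where consistency fails and the calibration function can vanish for small ε.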