Optimistic Bounds for Multi-output Learning
Authors: Henry Reeve, Ata Kaban
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We investigate the challenge of multi-output learning... We then show that the self-bounding Lipschitz condition gives rise to optimistic bounds for multi-output learning... The proof exploits local Rademacher complexity combined with a powerful minoration inequality... As an application we derive a state-of-the-art generalisation bound for multi-class gradient boosting. Our objective is to provide a general framework for establishing guarantees for multi-output prediction problems. A fundamental challenge in the statistical learning theory of multi-output prediction is to obtain bounds that allow for (i) favourable convergence rate with the sample size, and (ii) favourable dependence of the risk on the dimensionality of the output space. (A hedged sketch of the self-bounding Lipschitz condition referenced here is given after the table.) |
| Researcher Affiliation | Academia | Henry W.J. Reeve¹, Ata Kabán¹ (¹School of Computer Science, University of Birmingham). Correspondence to: Henry W.J. Reeve <henrywjreeve@gmail.com>. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | No | This paper is theoretical and does not use datasets for empirical evaluation, thus no information about publicly available training datasets is provided. |
| Dataset Splits | No | This paper is theoretical and does not report empirical experiments, therefore no training/validation/test dataset splits are provided. |
| Hardware Specification | No | This paper is theoretical and does not report empirical experiments, therefore no hardware specifications are mentioned. |
| Software Dependencies | No | This paper is theoretical and does not report empirical experiments; therefore, no specific software dependencies with version numbers are listed. |
| Experiment Setup | No | This paper is theoretical and does not report empirical experiments, therefore no experimental setup details such as hyperparameters or training settings are provided. |
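
For context on the quoted abstract, the paper's analysis centres on a self-bounding Lipschitz condition on the loss. The LaTeX sketch below paraphrases that condition as we recall it, purely as an illustration: the symbols λ and θ, the use of the sup-norm, and the admissible range of the exponent are assumptions here and should be checked against the paper's own definition.

```latex
% Hedged sketch (paraphrase, not the paper's verbatim definition):
% a loss \ell : \mathbb{R}^q \times \mathcal{Y} \to [0,\infty) is
% (\lambda, \theta)-self-bounding Lipschitz if, for all labels y and
% all score vectors u, v \in \mathbb{R}^q,
\[
  \bigl| \ell(u, y) - \ell(v, y) \bigr|
  \;\le\;
  \lambda \cdot \max\bigl\{ \ell(u, y),\, \ell(v, y) \bigr\}^{\theta}
  \cdot \| u - v \|_{\infty}.
\]
% The effective Lipschitz constant thus shrinks with the loss itself,
% which is the mechanism behind the "optimistic" (fast-rate) bounds
% when the best-in-class risk is small.
```

On this reading, the condition interpolates between a standard Lipschitz assumption (θ = 0) and progressively stronger self-bounding behaviour as θ grows, which is how the quoted abstract's goals (i) and (ii) are addressed simultaneously; again, this interpretation is a sketch rather than a statement of the authors' exact result.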