PAC-Bayesian Bound for the Conditional Value at Risk

Authors: Zakaria Mhammedi, Benjamin Guedj, Robert C. Williamson

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | This paper presents a PAC-Bayesian generalization bound for learning algorithms that minimize the conditional value at risk (CVaR) of the empirical loss. The bound is guaranteed to be small when the empirical CVaR is small. This is achieved by reducing the problem of estimating CVaR to that of merely estimating an expectation, which, as a by-product, yields concentration inequalities for CVaR even when the random variable in question is unbounded. |
| Researcher Affiliation | Collaboration | Zakaria Mhammedi: The Australian National University and Data61. Benjamin Guedj: Inria and University College London. Robert C. Williamson: bobwilliamsonoz@icloud.com. |
| Pseudocode | No | The paper contains mathematical derivations, lemmas, and theorems, but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides no concrete access to source code for the methodology described (e.g., a repository link, an explicit code-release statement, or code in the supplementary materials). |
| Open Datasets | No | The paper is theoretical and does not reference a publicly available or open dataset with concrete access information for empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and describes no experiments, so no training, validation, or test split information is provided. |
| Hardware Specification | No | The paper is theoretical and describes no experiments, so no hardware details are provided. |
| Software Dependencies | No | The paper is theoretical and describes no experiments, so no ancillary software dependencies with version numbers are provided. |
| Experiment Setup | No | The paper is theoretical and describes no experiments, so no setup details such as hyperparameters or training configurations are provided. |
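For concreteness, the empirical CVaR that the paper's bound controls can be sketched as below. This is a minimal illustration, not the paper's construction: the function name `empirical_cvar` is hypothetical, and it uses one common convention (the mean of the worst ⌈αn⌉ observed losses); the paper's exact definition of CVaR at level α may differ in its tail-probability parameterization.

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR at level alpha: the mean of the worst
    ceil(alpha * n) losses (one common sample-based convention).
    """
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = int(np.ceil(alpha * len(losses)))  # size of the worst tail
    return losses[:k].mean()

# Losses 1..10 at alpha = 0.3: the worst 3 losses are 10, 9, 8,
# so the empirical CVaR is 9.0.
print(empirical_cvar(np.arange(1, 11), 0.3))
```

As α → 1 this recovers the plain empirical mean, which matches the intuition that CVaR interpolates between the expected loss and the worst-case loss.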