Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PAC-Bayes Generalisation Bounds for Heavy-Tailed Losses through Supermartingales

Authors: Maxime Haddouche, Benjamin Guedj

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We contribute PAC-Bayes generalisation bounds for heavy-tailed losses under the sole assumption of bounded variance of the loss function. Under that assumption, we extend previous results from Kuzborskij and Szepesvári (2019). Our key technical contribution is exploiting an extension of Markov's inequality for supermartingales. Our proof technique unifies and extends different PAC-Bayesian frameworks by providing bounds for unbounded martingales as well as bounds for batch and online learning with heavy-tailed losses.
Researcher Affiliation | Academia | Maxime Haddouche EMAIL Inria, University College London and Université de Lille; Benjamin Guedj EMAIL Inria and University College London
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It focuses on theoretical proofs and mathematical derivations.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide any links to a code repository.
Open Datasets | No | The paper is theoretical and does not use or refer to any specific publicly available datasets for experimental evaluation.
Dataset Splits | No | The paper is theoretical and does not involve experimental evaluation on specific datasets that would require splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or hardware used for computations.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies or version numbers needed to replicate experiments.
Experiment Setup | No | The paper is theoretical and does not detail any experimental setup, hyperparameters, or training configurations.
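The "extension of Markov's inequality for supermartingales" mentioned in the abstract is commonly known as Ville's maximal inequality: for a nonnegative supermartingale (M_t), P(sup_t M_t >= a) <= E[M_0]/a. The sketch below is a standalone Monte-Carlo illustration of this inequality, not code from the paper; the specific martingale (a product of i.i.d. Uniform[0, 2] factors) and all parameter values are illustrative choices.

```python
import random

def ville_bound_demo(trials=20000, horizon=200, a=5.0, seed=0):
    """Monte-Carlo check of Ville's maximal inequality
    P(sup_t M_t >= a) <= E[M_0]/a for a nonnegative supermartingale.

    Here M_t is a product of i.i.d. Uniform[0, 2] factors (each with
    mean 1), so it is a nonnegative martingale started at M_0 = 1.
    Returns the empirical probability that the running supremum of
    M_t over t <= horizon ever reaches the threshold a.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = 1.0
        for _ in range(horizon):
            m *= rng.uniform(0.0, 2.0)
            if m >= a:  # running supremum crossed the threshold
                hits += 1
                break
    return hits / trials

p_hat = ville_bound_demo()
bound = 1.0 / 5.0  # E[M_0] / a
print(f"empirical crossing prob: {p_hat:.4f}  (Ville bound: {bound:.2f})")
```

The empirical crossing probability stays below E[M_0]/a = 0.2, as Ville's inequality requires; the paper's contribution is to combine this kind of maximal inequality with PAC-Bayesian change-of-measure arguments under a bounded-variance assumption.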