Concentration Inequalities for General Functions of Heavy-Tailed Random Variables

Authors: Shaojie Li, Yong Liu

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we obtain unbounded analogues of the popular bounded difference inequality for functions of independent random variables with heavy-tailed distributions. The main results provide a general framework applicable to all heavy-tailed distributions with finite variance. To illustrate the strength of our results, we present applications to sub-exponential tails, sub-Weibull tails, and heavier polynomially decaying tails. Applied to some standard problems in statistical learning theory (vector-valued concentration, Rademacher complexity, and algorithmic stability), we show that these inequalities allow an extension of existing results to heavy-tailed distributions up to finite variance. (For reference, the classical bounded difference inequality is stated after the table.)
Researcher Affiliation | Academia | 1. Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; 2. Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China.
Pseudocode | No | The paper contains mathematical derivations, theorems, and proofs, but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not state that source code is released and provides no link to a code repository.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets; no training data is used.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets; no train/validation/test split information is provided.
Hardware Specification | No | The paper is theoretical and describes no experiments that would require hardware specifications.
Software Dependencies | No | The paper is theoretical and describes no experiments that would require specific software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and describes no experimental setup, hyperparameters, or training configurations.
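
For readers unfamiliar with the starting point, the classical bounded difference (McDiarmid) inequality that the paper generalizes can be sketched as follows. This is standard background stated for context, not a result reproduced from the paper, and the notation is chosen here for illustration:

If $X_1, \dots, X_n$ are independent random variables and $f$ satisfies the bounded differences condition
$$
\sup_{x_1, \dots, x_n,\, x_i'} \big| f(x_1, \dots, x_i, \dots, x_n) - f(x_1, \dots, x_i', \dots, x_n) \big| \le c_i, \qquad 1 \le i \le n,
$$
then for every $t > 0$,
$$
\Pr\big( f(X_1, \dots, X_n) - \mathbb{E}\, f(X_1, \dots, X_n) \ge t \big) \le \exp\!\left( -\frac{2 t^2}{\sum_{i=1}^{n} c_i^2} \right).
$$

Per the abstract, the paper's contribution is an unbounded analogue of this result: the uniform difference bounds $c_i$ are relaxed so that concentration still holds for functions of independent heavy-tailed variables, assuming only finite variance.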