Concentration Inequalities for Conditional Value at Risk

Authors: Philip Thomas, Erik Learned-Miller

ICML 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In order to better visualize the benefits of our new inequalities relative to those of Brown (2007), we conducted a series of empirical comparisons. The results of these comparisons are presented in Figure 8." |
| Researcher Affiliation | Academia | College of Information and Computer Sciences, University of Massachusetts Amherst. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about releasing code or links to a code repository. |
| Open Datasets | No | The paper describes using generated samples from various distributions (log-normal, beta) for empirical comparisons, but it does not refer to a specific publicly available dataset with concrete access information (e.g., a link or formal citation). |
| Dataset Splits | No | The paper discusses empirical comparisons of inequalities using generated samples (e.g., "n = 10,000 samples"), but it does not specify the train/validation/test splits typically used for machine learning model training and evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not list any specific software components or libraries with their version numbers. |
| Experiment Setup | Yes | "In all cases, unless otherwise specified, we always used n = 10,000 samples, α = 0.05, and δ = 0.05. The sixth and seventh rows of Figure 8 show how the upper and lower bounds change as the amount of data, n, is varied." |
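The experiment setup quoted above (n = 10,000 samples, α = 0.05, δ = 0.05, samples drawn from log-normal and beta distributions) can be sketched with a short, hedged example. The empirical CVaR estimator below (mean of the worst α-fraction of samples) and the log-normal parameters are illustrative assumptions, not code or conventions taken from the paper, which may use a different tail or sign convention.

```python
import numpy as np

# Illustrative sketch of the paper's empirical setup (not the authors' code).
# Assumed convention: CVaR_alpha is the expected value of the largest
# alpha-fraction of outcomes.
rng = np.random.default_rng(0)

n = 10_000    # number of samples, as stated in the paper's setup
alpha = 0.05  # tail probability, as stated in the paper's setup
delta = 0.05  # confidence parameter for the bounds (unused in this sketch)

# Log-normal shape parameters here are illustrative assumptions.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=n)

def empirical_cvar(x, alpha):
    """Mean of the largest ceil(alpha * len(x)) samples (one common estimator)."""
    x = np.sort(x)
    k = int(np.ceil(alpha * len(x)))
    return x[-k:].mean()

cvar_hat = empirical_cvar(samples, alpha)
print(f"empirical CVaR at alpha={alpha}: {cvar_hat:.3f}")
```

A concentration inequality such as those compared in the paper would then wrap this point estimate in a high-confidence (1 − δ) upper or lower bound; the sketch stops at the plug-in estimate.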