Risk and Regret of Hierarchical Bayesian Learners

Authors: Jonathan H. Huggins, Joshua B. Tenenbaum

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present a set of analytical tools for understanding hierarchical priors in both the online and batch learning settings. We provide regret bounds under log-loss, which show how certain hierarchical models compare, in retrospect, to the best single model in the model class. We also show how to convert a Bayesian log-loss regret bound into a Bayesian risk bound for any bounded loss, a result which may be of independent interest. Risk and regret bounds for Student's t and hierarchical Gaussian priors allow us to formalize the concepts of robustness and sharing statistical strength.
Researcher Affiliation | Academia | Jonathan H. Huggins (JHUGGINS@MIT.EDU), Computer Science and Artificial Intelligence Laboratory, MIT; Joshua B. Tenenbaum (JBT@MIT.EDU), Brain and Cognitive Science Department, MIT
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not mention the release of any open-source code for the methodology described.
Open Datasets | No | The paper refers to datasets used in other research (e.g., Salakhutdinov et al., 2011) for context and examples, but it does not conduct experiments of its own or provide access information for any dataset.
Dataset Splits | No | The paper does not describe any dataset splits (e.g., percentages, sample counts, or citations to predefined splits), as it does not conduct empirical experiments.
Hardware Specification | No | The paper does not mention any specific hardware used for its research, as it primarily focuses on theoretical analysis.
Software Dependencies | No | The paper does not list any software dependencies with version numbers, as it does not report on empirical experiments requiring such details.
Experiment Setup | No | The paper does not provide details about an experimental setup, hyperparameters, or training configurations, as its contributions are theoretical.
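
To make the log-loss regret notion quoted in the abstract concrete, here is a minimal numerical sketch in Python (ours, not from the paper) of the classical finite-class bound that results of this kind refine: a Bayesian mixture over a finite model class incurs cumulative log loss at most log(1/pi(theta*)) more than the best single model theta* in hindsight. The Bernoulli model class, uniform prior, and data-generating parameter below are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite model class of Bernoulli parameters (illustrative, not from the paper).
thetas = np.linspace(0.1, 0.9, 9)
prior = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior pi(theta)

T = 500
x = rng.binomial(1, 0.7, size=T)  # synthetic data; 0.7 lies in the model class

posterior = prior.copy()
bayes_loss = 0.0
for t in range(T):
    # Bayesian mixture prediction: posterior-predictive probability of x_t = 1.
    p1 = float(np.dot(posterior, thetas))
    p = p1 if x[t] == 1 else 1.0 - p1
    bayes_loss += -np.log(p)
    # Standard Bayesian update: reweight each model by its likelihood of x_t.
    lik = thetas if x[t] == 1 else 1.0 - thetas
    posterior = posterior * lik
    posterior /= posterior.sum()

# Cumulative log loss of each single model, and the best one in hindsight.
n1 = x.sum()
model_losses = -(n1 * np.log(thetas) + (T - n1) * np.log(1.0 - thetas))
best_loss = model_losses.min()

regret = bayes_loss - best_loss
print(f"regret = {regret:.3f} <= log(1/pi(theta*)) = {np.log(len(thetas)):.3f}")

For this uniform prior the printed regret is guaranteed to stay below log 9 ≈ 2.197 on any data sequence, since the mixture's cumulative log loss is -log of the marginal likelihood, which is at most -log pi(theta*) plus the best model's loss. The paper's contribution is analogous guarantees for hierarchical priors such as Student's t and hierarchical Gaussians, which this sketch does not implement.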