A unified framework for information-theoretic generalization bounds

Authors: Yifeng Chu, Maxim Raginsky

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in L_{ψ_p} Orlicz spaces (see the background sketch below the table).
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of Illinois, Urbana, IL 61801, USA.
Pseudocode | No | The paper is highly theoretical and does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper is theoretical and does not mention releasing any open-source code for the methodology described.
Open Datasets | No | The paper describes theoretical work and does not use or mention any datasets for training.
Dataset Splits | No | The paper describes theoretical work and does not mention any dataset splits for validation.
Hardware Specification | No | The paper describes theoretical work and does not mention any hardware specifications used for experiments.
Software Dependencies | No | The paper describes theoretical work and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper describes theoretical work and does not involve an experimental setup with hyperparameters or training settings.
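
Background sketch for the technical tool named in the Research Type row: the LaTeX snippet below records two standard ingredients that decorrelation lemmas of this kind are typically built on, the Donsker-Varadhan change of measure and the L_{ψ_p} Orlicz norm. This is an illustrative sketch of well-known definitions, not a reproduction of the paper's own lemma; the symbols P, Q, f, and X are generic placeholders and do not come from the paper.

% Illustrative background only; not the paper's lemma. P and Q are probability
% measures with D(P||Q) the relative entropy, f a measurable function, X a
% real-valued random variable, and p >= 1.
\[
  \mathbb{E}_{P}[f] \;\le\; D(P \,\|\, Q) \;+\; \log \mathbb{E}_{Q}\!\bigl[e^{f}\bigr]
  \qquad \text{(Donsker--Varadhan change of measure)}
\]
\[
  \psi_p(x) := e^{x^p} - 1, \qquad
  \|X\|_{\psi_p} := \inf\bigl\{\, c > 0 \,:\, \mathbb{E}\bigl[\psi_p(|X|/c)\bigr] \le 1 \,\bigr\}
  \qquad \text{(}L_{\psi_p}\text{ Orlicz norm)}
\]

Young's inequality for a convex Young function ψ and its convex conjugate ψ*, namely ab ≤ ψ(a) + ψ*(b) for a, b ≥ 0, is the step the assessment describes the paper as relaxing; the precise relaxed form used in the paper is not reproduced here.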