Generalization Error Bounds on Deep Learning with Markov Datasets

Authors: Lan V. Truong

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we derive upper bounds on generalization errors for deep neural networks with Markov datasets. These bounds are developed based on Koltchinskii and Panchenko's approach for bounding the generalization error of combined classifiers with i.i.d. datasets. The development of new symmetrization inequalities in high-dimensional probability for Markov chains is a key element in our extension, where the absolute spectral gap of the infinitesimal generator of the Markov chain serves as a key parameter in these inequalities.
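As an illustrative aside (not part of the paper), the absolute spectral gap that the abstract refers to can be computed directly for a discrete-time chain with a known transition matrix: it is one minus the largest modulus among the non-unit eigenvalues. A minimal numpy sketch, assuming a small two-state chain chosen purely for demonstration:

```python
import numpy as np

def absolute_spectral_gap(P):
    """Absolute spectral gap 1 - lambda*, where lambda* is the largest
    modulus among the non-unit eigenvalues of the transition matrix P.
    A stochastic matrix always has 1 as its top eigenvalue."""
    eigvals = np.linalg.eigvals(P)
    moduli = np.sort(np.abs(eigvals))[::-1]  # moduli[0] is the unit eigenvalue
    return 1.0 - moduli[1]

# Hypothetical two-state chain with switch probabilities 0.3 and 0.2.
# Its eigenvalues are 1 and 1 - 0.3 - 0.2 = 0.5, so the gap is 0.5.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
gap = absolute_spectral_gap(P)
```

A larger gap means faster mixing, which is why this quantity naturally controls how far a Markov sample deviates from i.i.d. behavior in concentration-type bounds.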
Researcher Affiliation | Academia | Lan V. Truong, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ; lt407@cam.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide access to source code for its methodology.
Open Datasets | No | The paper is theoretical and does not describe experiments on a specific dataset, nor does it provide access information for a publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not describe experiments, so no dataset split information for validation is provided.
Hardware Specification | No | The paper does not provide hardware details, as it is theoretical and does not describe experiments.
Software Dependencies | No | The paper does not provide ancillary software details, as it is theoretical and does not describe experiments.
Experiment Setup | No | The paper does not contain experimental setup details, as it is theoretical and does not describe experiments.