A Limitation of the PAC-Bayes Framework

Authors: Roi Livni, Shay Moran

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well known that this task is learnable using just O(log(1/δ)/ϵ) examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
Researcher Affiliation | Collaboration | Roi Livni, Department of Electrical Engineering, Tel-Aviv University, Israel, rlivni@tauex.tau.ac.il; Shay Moran, Department of Mathematics, Technion, Haifa, Israel, shaymoran1@gmail.com... Part of this work was done while the author was at Google Research.
Pseudocode | No | The paper describes an algorithm 'A' for analytical purposes but does not provide it in a structured pseudocode or algorithm block.
Open Source Code | No | The paper is theoretical and does not mention providing open-source code for its methodology.
Open Datasets | No | This is a theoretical paper and does not describe empirical experiments or provide access to any specific datasets.
Dataset Splits | No | This is a theoretical paper and does not describe empirical experiments or provide dataset split information.
Hardware Specification | No | This is a theoretical paper and does not mention any hardware specifications for running experiments.
Software Dependencies | No | This is a theoretical paper and does not mention any software dependencies with version numbers needed to run experiments.
Experiment Setup | No | This is a theoretical paper and does not describe experimental setup details such as hyperparameters or training configurations.
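To make the abstract's claim concrete: the "easy learning task" is learning 1-D thresholds, and the standard consistent (ERM) learner achieves the O(log(1/δ)/ϵ) sample complexity the paper cites. The sketch below is illustrative only and is not taken from the paper; the function name and the labeling convention h_t(x) = 1 iff x ≥ t are assumptions made for this example. The paper's result is that, even though such a learner succeeds, any PAC-Bayes analysis of any learner for this class admits a realizable distribution with an arbitrarily large bound.

```python
import random

def learn_threshold(sample):
    """Consistent (ERM) learner for 1-D thresholds h_t(x) = 1 iff x >= t.

    Assumes the sample is realizable, i.e. labeled by some true threshold.
    Returns a threshold consistent with every labeled example.
    """
    ones = [x for x, y in sample if y == 1]
    # Every positive point x satisfies x >= min(ones), so it stays positive.
    # Under realizability every negative point lies strictly below min(ones).
    # With no positives, t = +inf labels everything 0, which is consistent.
    return min(ones, default=float("inf"))

# Toy usage: 50 points labeled by a hypothetical true threshold t* = 0.3.
random.seed(0)
t_star = 0.3
sample = [(x, int(x >= t_star)) for x in (random.random() for _ in range(50))]
t_hat = learn_threshold(sample)
assert all((x >= t_hat) == (y == 1) for x, y in sample)  # zero empirical error
```

Because the threshold class has VC dimension 1, this consistent learner PAC-learns it from O(log(1/δ)/ϵ) examples in the realizable case, which is the sample-complexity fact the abstract says cannot be recovered via a PAC-Bayes bound.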