On the Complexity of PAC Learning in Hilbert Spaces

Authors: Sergei Chubanov

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our contribution to this line of research is the following: First, we propose a new algorithm that improves existing bounds on the running time of PAC learning algorithms in the finite-dimensional case. Second, our algorithm is applicable to infinite-dimensional Hilbert spaces, in which case it guarantees similar complexity bounds in terms of a model of computation that includes inner products as elementary operations. ... Theorem 1: If there is a consistent γ-separating t-polyhedron, then the running time of Algorithm 2 is bounded by m^{O(tγ^{-2} log_m(γ^{-1}))}.
Researcher Affiliation | Industry | Sergei Chubanov, Bosch Center for Artificial Intelligence, Germany (sergei.chubanov@de.bosch.com)
Pseudocode | Yes | Algorithm 1: linear-programming (LP) algorithm; Algorithm 2: separation algorithm; Algorithm 3: improper-separation algorithm
Open Source Code | No | The paper does not provide any specific links or explicit statements regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and defines an 'instance space X' and concepts like 'drawing a sample', but does not refer to or provide access information for any specific public dataset.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper describes theoretical algorithms and their complexity bounds, and does not report on empirical experiments; therefore no hardware specifications are provided.
Software Dependencies | No | The paper describes theoretical algorithms and their complexity, and does not report on empirical experiments requiring specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and complexity analysis; it does not provide details regarding experimental setup, hyperparameters, or training configurations.
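
The model of computation quoted above, in which inner products are elementary operations, is the standard setting of kernel methods: an algorithm can learn in an infinite-dimensional Hilbert space while only ever evaluating inner products in closed form. As a minimal illustrative sketch of that idea (this is a generic dual-form kernel perceptron, not the paper's Algorithm 2, and the names `rbf_kernel` and `kernel_perceptron` are ours):

```python
import numpy as np

def rbf_kernel(x, z, width=1.0):
    """Inner product <phi(x), phi(z)> in an infinite-dimensional RKHS,
    computed in closed form. ('width' is the RBF bandwidth, unrelated
    to the margin gamma of Theorem 1.)"""
    return np.exp(-width * np.sum((x - z) ** 2))

def kernel_perceptron(X, y, kernel=rbf_kernel, epochs=100):
    """Dual perceptron: the hypothesis f(x) = sum_j alpha[j]*y[j]*k(X[j], x)
    is maintained purely through kernel (inner-product) evaluations."""
    n = len(X)
    alpha = np.zeros(n)
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            f = sum(alpha[j] * y[j] * kernel(X[j], X[i]) for j in range(n))
            if y[i] * f <= 0:   # misclassified (or on the decision boundary)
                alpha[i] += 1.0
                mistakes += 1
        if mistakes == 0:       # consistent with the sample
            break
    return alpha

def predict(X, y, alpha, x, kernel=rbf_kernel):
    f = sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))
    return 1 if f > 0 else -1

# XOR-like sample: not linearly separable in R^2, but separable in the RKHS.
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([1, 1, -1, -1])
alpha = kernel_perceptron(X, y)
```

The hypothesis is stored as dual coefficients over the sample rather than as an explicit (infinite-dimensional) weight vector, which is what makes learning in a Hilbert space feasible with inner products as the only elementary operations.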